KubeCon + CloudNativeCon EU 2026 took place in Amsterdam last week. Together with Vahan and Daniel, I went to take the pulse of the cloud-native ecosystem and evaluate what’s coming down the line for the Kubernetes clusters and platforms we manage.
Here’s what stood out and what we’re doing about it.
AI is dominating the conversation
This was impossible to miss. The CNCF is going all-in on positioning Kubernetes as a first-class platform for AI training and inference workloads. Keynotes, breakout sessions, and sponsor booths were overwhelmingly focused on AI.
Digital sovereignty and the Cyber Resilience Act
Day two of the keynotes shifted focus from AI to another theme that’s increasingly hard to ignore: digital sovereignty.
Several keynotes addressed this head-on. Ericsson’s Jan Melen spoke about regulation, sovereignty, and the future of open collaboration in Europe. SNCF (the French national railway) showed how they rebuilt their entire platform on open-source Kubernetes after finding that managed cloud offerings fell short of their resilience targets — they now run 30% of applications in their own data centres. The German military’s BWI presented their sovereign multi-cloud strategy built on cloud-native technologies. The message was consistent: for critical European infrastructure, relying solely on US hyperscalers is becoming a strategic risk, not just a compliance concern.
The EU’s Cyber Resilience Act (CRA) also got dedicated keynote time, with Linux kernel maintainer Greg Kroah-Hartman framing it not as a regulatory burden but as a necessary “ingredient label” for software. The CRA’s vulnerability reporting requirements kick in September 2026, with full compliance required by December 2027. For organisations that distribute software commercially, this means concrete obligations around SBOMs, vulnerability handling, and component documentation.
Policy engines: automating best practices at deploy time
Multiple talks reinforced something we’ve been thinking about for a while: policy engines aren’t just for security compliance anymore. They’re being used to enforce cost optimisation rules, operational standards, and deployment best practices, all at admission time, before bad configuration hits production.
Shopify shared a particularly interesting pattern: using OPA to give developers feedback directly on pull requests, catching misconfiguration before it even reaches a cluster. This shifts the feedback loop left, which is exactly where it belongs.
The three main contenders in this space each have distinct tradeoffs:
- Kyverno is the most Kubernetes-native option. It works with familiar YAML, includes built-in dashboarding through its Policy Reporter, and has the lowest barrier to entry. Policies are expressed as Kubernetes resources, which makes them easy to version and deploy through GitOps workflows.
- OPA with Gatekeeper is more powerful and more general. OPA’s Rego language allows complex logic that goes beyond what Kyverno can express, and it works outside of Kubernetes too — across CI pipelines, Terraform plans, and API authorization. The tradeoff is a steeper learning curve. An interesting side note: Gatekeeper’s ability to replicate objects across namespaces could serve as a replacement for Hierarchical Namespace Controller (HNC) in some use cases.
- Kubewarden takes a different approach entirely, packaging policies as OCI images written in any language that compiles to WebAssembly. The community favours it for its flexibility, though the build step makes policies harder to debug than Kyverno’s declarative approach.
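To make the “policies as Kubernetes resources” point concrete, here’s a minimal sketch of a Kyverno validation policy. The policy name, rule, and message are illustrative, not something from the talks; the structure follows Kyverno’s `kyverno.io/v1` API.

```yaml
# Illustrative Kyverno policy: require CPU and memory limits on all containers.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits   # hypothetical policy name
spec:
  validationFailureAction: Audit  # switch to Enforce to block at admission
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "All containers must declare CPU and memory limits."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    cpu: "?*"     # "?*" means: any non-empty value
                    memory: "?*"
```

Because this is just another Kubernetes resource, it versions and ships through the same GitOps pipeline as everything else, which is a large part of Kyverno’s appeal.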
For managed Kubernetes providers like us, policy engines represent an opportunity to codify the operational knowledge we’ve built up over years of managing clusters and surface it proactively to the teams deploying on our clusters.
Self-service infrastructure: is Kubernetes still the right interface?
Crossplane and AWS ACK were recurring topics at the conference, particularly in the context of platform engineering and self-service infrastructure provisioning. The premise is straightforward: let developers declare the cloud resources they need (databases, queues, caches) as Kubernetes objects, and let a controller reconcile them into existence.
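As a sketch of what that developer experience looks like with Crossplane: the claim kind and all of its fields below are hypothetical, since they would be defined by a platform team’s own XRD and Composition rather than by Crossplane itself.

```yaml
# Hypothetical developer-facing claim. The PostgreSQLInstance kind, API group,
# and parameters are defined by a platform team's XRD/Composition.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
  namespace: team-orders
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: aws            # pick which Composition satisfies the claim
  writeConnectionSecretToRef:
    name: orders-db-conn       # credentials land in a Secret in this namespace
```

The developer never touches cloud provider APIs; the controller reconciles this object into an actual database and writes the connection details back into the cluster.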
We’ve been evaluating this approach on our roadmap for a while. But the conversations we had at KubeCon challenged some of our assumptions.
The core question is this: with AI-powered development tools evolving as fast as they are, is writing custom Crossplane Compositions and managing infrastructure through Kubernetes CRDs still the simplest interface to offer? Writing and maintaining Compositions is non-trivial work. The abstraction layer has real value, but the effort to build and maintain it needs to be weighed against alternatives that may be emerging.
This isn’t a settled question. What’s clear is that the “right” self-service interface is shifting, and anyone building a platform engineering strategy needs to be honest about whether their chosen abstraction layer is actually reducing complexity or just moving it.
Running Crossplane on tenant clusters (rather than a central management cluster) also came up. This narrows the functional gap with AWS ACK significantly. At that point the difference is mostly about vendor neutrality: ACK only manages AWS resources, while Crossplane’s provider ecosystem spans multiple clouds.
Checkpoint/restore: Ctrl-X, Ctrl-V for pods
The Kubernetes Working Group on checkpoint/restore presented their progress on bringing native container checkpointing to Kubernetes (KEP-5823). The idea is exactly what it sounds like: freeze a running container’s state to disk, then restore it elsewhere, potentially on a different node with different resources.
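The building block for this already exists today: the kubelet has exposed a node-level checkpoint endpoint since Kubernetes 1.25 (alpha, behind the `ContainerCheckpoint` feature gate, requiring a CRIU-capable runtime such as CRI-O). A sketch of invoking it, assuming kubeadm-style client certificate paths:

```shell
# Checkpoint one container via the kubelet API: /checkpoint/{ns}/{pod}/{container}.
# Requires the ContainerCheckpoint feature gate and a CRIU-enabled runtime.
# Cert/key paths below are kubeadm defaults and vary by distribution.
curl -sk -X POST \
  --cert /var/lib/kubelet/pki/kubelet-client-current.pem \
  --key  /var/lib/kubelet/pki/kubelet-client-current.pem \
  "https://localhost:10250/checkpoint/default/my-pod/my-container"
# On success the kubelet writes a checkpoint archive under
# /var/lib/kubelet/checkpoints/ for later restore tooling to pick up.
```

What the KEP adds on top is the orchestration: today restoring that archive is a manual, node-level affair, whereas the working group is aiming at Kubernetes-native restore semantics.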
The most interesting application for managed clusters is the combination with VPA (Vertical Pod Autoscaler). Today, vertically scaling a pod means killing it and recreating it with new resource limits. With checkpoint/restore, you could freeze the process, move it to a node with more headroom, and resume without losing in-memory state or forcing a cold start.
This is still very much work-in-progress and not production-ready, but it’s a meaningful evolution in pod lifecycle management worth keeping an eye on.
Alerting enrichment: smarter incident context
One talk on alert fatigue introduced a pattern we found compelling: using an AI reasoning step that watches for alerts via Crossplane operations, investigates the alert context automatically, and annotates the affected resources with its findings before a human ever gets paged.
The idea isn’t to replace human judgment. It’s to ensure that when an engineer gets paged at 3am, the alert comes with context: what was checked, what was ruled out, what looks suspicious. This turns alerts from “something is wrong” into “something is wrong, and here’s what we know so far.”
No turnkey solution exists for this yet, but the architectural pattern of event-driven investigation that enriches alerts with context is one we want to adopt. More broadly, this could shift our role from simply forwarding alerts to being proactively helpful: giving customers not just a notification that something is wrong, but automated context and next steps that help them engage with incidents more effectively. We expect tooling in this space to grow and mature soon.
Cost visibility: OpenCost
OpenCost caught our attention as a flexible, open-source option for giving customers direct insight into their Kubernetes spend. It supports multi-cluster setups, breaks down costs by namespace, deployment, and label, and integrates with Prometheus — all things that align well with our existing stack. For teams that want to understand where their cloud budget is going without relying on cloud-provider billing dashboards, this could be a practical addition to our offering.
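For reference, getting a first look at OpenCost is a small lift. A sketch using the community Helm chart (repo URL per the OpenCost project; chart values such as the Prometheus endpoint would need adjusting to the local stack):

```shell
# Install OpenCost from its community Helm chart into its own namespace.
# By default it expects a reachable Prometheus; override via chart values.
helm repo add opencost https://opencost.github.io/opencost-helm-chart
helm repo update
helm install opencost opencost/opencost \
  --namespace opencost --create-namespace
```

From there, costs broken down by namespace, deployment, and label are available through its UI and API, and as Prometheus metrics for existing dashboards.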
Gateway API: the migration continues
We recently wrote about our ingress-nginx migration strategy. KubeCon reinforced that this transition is happening across the ecosystem, though the pace varies widely. The CNCF released a Gateway API migration wizard to help teams evaluate their options.
What we’re taking away
KubeCon EU 2026 reinforced that the Kubernetes ecosystem is maturing in ways that directly affect how we build and manage platforms. The hype cycle around AI is real, but underneath it, there are practical developments in policy enforcement, GitOps tooling, pod lifecycle management, and self-service infrastructure that will shape the next generation of managed Kubernetes platforms.
For us, the key takeaways are:
- RBAC and access management remain the critical foundation. If you’re planning any kind of self-service, dashboarding, or policy enforcement, this is where to start.
- Policy engines are ready for adoption beyond security compliance. We’re evaluating Kyverno and OPA/Gatekeeper to bring automated best-practice feedback into our managed offering.
- Digital sovereignty: we’re exploring what it would take to bring our platform blueprint to an EU-based cloud provider, giving our customers a credible sovereign option.
- Cost visibility is a growing customer need. OpenCost stands out as an open-source option for giving customers direct insight into their Kubernetes spend.
- Alerting enrichment is how we want to take our monitoring to the next level: moving from forwarding alerts to providing automated context and next steps that help customers engage with incidents more effectively.
- Gateway API adoption is accelerating. Determining a path forward for adopting Gateway API is already on our roadmap. If you haven’t started planning your ingress-nginx migration, we can help.
We’ll be working these ideas into our roadmap in the months ahead.
If you’re tackling policy engines, sovereign infrastructure, or self-service platforms, I’d genuinely like to compare notes. Find me on LinkedIn or get in touch with us.

