
2026-01-29
Ingress-nginx is being retired in March 2026. After analyzing over 1,000 ingress resources across our managed clusters, we’ve landed on a hybrid approach: Traefik as the default replacement, with AWS Load Balancer Controller for environments that benefit from tight AWS integration. Here’s what we learned and how we’re moving forward.
In November 2025, Kubernetes SIG Network and the Security Response Committee announced the retirement of ingress-nginx. Best-effort maintenance will continue until March 2026. After that: no further releases, no bugfixes, and critically, no security updates.
For many of us, ingress-nginx has been the reliable workhorse of Kubernetes networking. It’s the Swiss Army knife that handles TLS termination, path-based routing, authentication, rate limiting, and dozens of other features through a sprawling collection of annotations. That flexibility powered countless clusters, from homelabs to massive production deployments. But it came at a cost.
The Kubernetes community called out the root cause directly: the breadth and flexibility that made ingress-nginx popular also made it increasingly difficult to maintain. Features that were once considered helpful options—like the ability to inject arbitrary nginx configuration via snippet annotations—came to be seen as serious security risks. Yesterday’s flexibility became today’s technical debt.
Despite its popularity, the project has struggled with maintainership for years. As the announcement noted, development work often fell to just one or two people, working on their own time after hours and on weekends. Plans to build a replacement controller (InGate) never progressed far enough to create a viable alternative. The decision to retire the project was made to prioritize user safety.
When we first heard the news, we needed to understand exactly what we were dealing with. We wrote scripts to analyze every ingress resource across all the Kubernetes clusters we manage, filtering for those using ingress-nginx IngressClasses.
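A simplified sketch of the kind of query we ran, assuming clusters where the IngressClass is named `nginx` (ingresses still using the legacy `kubernetes.io/ingress.class` annotation need a similar pass):

```bash
# Count annotation keys across all Ingresses bound to the "nginx" class.
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.ingressClassName == "nginx")
      | .metadata.annotations // {}
      | keys[]' \
  | sort | uniq -c | sort -rn
```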
The results: 1,097 unique ingress resources using 89 distinct annotation types.
Here’s what the usage distribution looked like for the most common annotations:
| Annotation | Count | Purpose |
|---|---|---|
| kubernetes.io/tls-acme | 1,097 | Automated TLS certificates |
| ssl-redirect | 881 | HTTPS redirection |
| rewrite-target | 554 | URL path rewriting |
| proxy-body-size | 316 | Upload size limits |
| proxy-read-timeout | 236 | Backend timeouts |
| enable-cors | 147 | CORS headers |
| permanent-redirect | 129 | 301 redirects |
| auth-url / auth-signin | 99 | External authentication |
| limit-rps | 61 | Rate limiting |
| affinity | 60 | Session stickiness |
| server-snippet / configuration-snippet | 56 | Custom nginx config |
This data told us two things. First, TLS and basic routing are universal—every ingress needs them. Second, there’s a long tail of advanced features (CORS, rate limiting, authentication, custom snippets) that significant portions of our customer base depend on.
The custom snippets were particularly concerning. These 56 ingresses contain raw nginx configuration that won’t translate to any other controller. Each one would need manual analysis.
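To make the problem concrete, here is a representative (hypothetical) example. Annotations like this embed raw nginx directives that only ingress-nginx understands:

```yaml
# Hypothetical snippet annotation; real ones vary widely per workload.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
      if ($request_uri ~* "^/internal/") {
        return 403;
      }
```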
We evaluated five potential replacements:
AWS Load Balancer Controller (ALB): Already part of our stack. Recent versions added native URL rewriting, which had been a major blocker. Tight integration with ACM, WAF, and Cognito. But no native support for CORS or rate limiting—you’d need CloudFront or WAF to fill those gaps.
Traefik: Strong focus on ingress-nginx migration, including experimental support for nginx annotations. Native CORS and rate limiting. Clean middleware architecture. Also supports Gateway API for the long term.
Gateway API (various controllers): The future of Kubernetes ingress. But ecosystem maturity is still limited, and many Helm charts don’t support it yet. Not ready for a forced migration on a tight timeline.
NGINX OSS Ingress (F5): We evaluated this as a potential drop-in replacement, but rejected it because critical features like OIDC authentication, session affinity, and proper metrics are gated behind the commercial NGINX Plus offering.
Zalando Skipper: Briefly considered, but not aligned with our requirements or long-term direction.
After running proof-of-concept deployments on our own infrastructure, we landed on a hybrid approach:
Traefik as the default replacement. It offers the broadest compatibility with ingress-nginx features, including the CORS and rate-limiting functionality that a significant portion of our customers rely on.
AWS Load Balancer Controller as an option for environments that benefit from CloudFront integration, WAF rules, or native AWS authentication via Cognito.
Why not just go all-in on ALB? During our rollout, we found that too much core functionality required bolting on additional AWS services. CORS headers? You need CloudFront with response header policies. Rate limiting? That’s WAF, with its own pricing model. For many workloads, this adds complexity and cost for capabilities that Traefik handles natively.
Why not just Traefik? Because some environments genuinely benefit from the AWS integration. If you’re already using CloudFront for caching and WAF for security, having ALB as your ingress controller creates a cleaner architecture than mixing in a separate proxy layer.
Here’s how the two options stack up against the features we found in active use:
| Feature | ALB | Traefik |
|---|---|---|
| TLS/SSL (ACM, cert-manager) | ✅ | ✅ |
| URL rewriting | ✅ (v2.14.1+) | ✅ |
| Redirects | ✅ | ✅ |
| Authentication (OIDC) | ✅ Native | ✅ ForwardAuth |
| IP whitelisting | ✅ | ✅ |
| Session affinity | ⚠️ IP mode only | ✅ Cookie-based |
| Proxy timeouts | ⚠️ Less granular | ✅ |
| CORS | ❌ (needs CloudFront) | ✅ Native |
| Rate limiting | ❌ (needs WAF) | ✅ Native |
| Custom snippets | ❌ | ⚠️ Middleware |
The bottom line: Traefik covers roughly 90% of use cases out of the box. ALB covers 70-75%, with the rest requiring AWS service additions.
We’ve built Traefik support directly into our kubernetes-stack. Once the feature flag is enabled in a cluster definition, Traefik deploys automatically and platform ingresses (Grafana, Prometheus, etc.) migrate over.
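The flag itself is stack-specific; as a purely hypothetical illustration, the change in a cluster definition is on the order of:

```yaml
# Hypothetical cluster-definition excerpt; the actual flag name in
# kubernetes-stack may differ.
ingress:
  controller: traefik  # deploys Traefik and migrates platform ingresses
```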
For customer workloads, the migration involves updating the ingressClassName and translating annotations. We’ve documented the most common patterns:
Simple TLS + routing: Minimal changes. Update the IngressClass, ensure your certificate setup works with the new controller.
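For example, a plain TLS ingress might need nothing more than a class change (hostname, issuer, and service names below are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumes cert-manager is in use
spec:
  ingressClassName: traefik  # was: nginx
  tls:
    - hosts: [app.example.com]
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```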
URL rewriting: Both ALB and Traefik support this, but the syntax differs. Traefik uses middleware; ALB uses transform annotations.
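As a sketch of the Traefik side, here is a middleware replacing a typical `rewrite-target` annotation (the regex must be adapted per ingress; the ALB equivalent is omitted here):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-api-prefix
spec:
  replacePathRegex:
    regex: ^/api/(.*)
    replacement: /$1
```

The middleware is then attached to the Ingress via the `traefik.ingress.kubernetes.io/router.middlewares` annotation, in the form `<namespace>-<name>@kubernetescrd`.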
CORS: If you’re on Traefik, define a CORS middleware and reference it. If you’re on ALB, you’ll need CloudFront or handle CORS at the application layer.
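A sketch of such a Traefik CORS middleware (origins, methods, and headers below are placeholders):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: cors
spec:
  headers:
    accessControlAllowOriginList:
      - https://app.example.com
    accessControlAllowMethods: [GET, POST, PUT, DELETE, OPTIONS]
    accessControlAllowHeaders: [Authorization, Content-Type]
    accessControlMaxAge: 3600
    addVaryHeader: true
```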
Rate limiting: Same pattern—native middleware on Traefik, WAF rules on ALB.
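A minimal Traefik rate-limit middleware, roughly equivalent to `limit-rps: 10` (the burst value is a judgment call per workload):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 10  # requests per second, averaged
    burst: 20    # short spikes allowed above the average
```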
Authentication: ALB has native OIDC/Cognito support. Traefik uses ForwardAuth, typically pointing to an oauth2-proxy or similar.
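A ForwardAuth sketch, assuming an oauth2-proxy reachable in-cluster (service name, namespace, and response headers below are placeholders):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: oauth
spec:
  forwardAuth:
    address: http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth
    trustForwardHeader: true
    authResponseHeaders:
      - X-Auth-Request-User
      - X-Auth-Request-Email
```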
Custom snippets: These require manual analysis. Some may translate to middleware, others may need application-level changes.
The 56 ingresses using server-snippet or configuration-snippet are our biggest unknown. These contain raw nginx configuration—header manipulation, complex routing logic, security controls, request/response modifications.
There’s no automated path here. This is where the bulk of the migration grunt work will land. Each snippet needs to be:

- understood: what the nginx configuration actually does, and whether it’s still needed
- translated: to a Traefik middleware, an AWS service feature, or an application-level change
- verified: tested against the original behavior before cutover

This is the unglamorous work that makes migrations succeed or fail.
The Kubernetes Gateway API is the long-term direction for ingress. It’s more expressive than the Ingress resource, with better support for traffic splitting, header manipulation, and cross-namespace routing.
Both Traefik and (to a lesser extent) ALB support Gateway API. We’re treating this migration as a stepping stone—move off ingress-nginx now, adopt Gateway API gradually as the ecosystem matures and Helm charts add support.
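For a sense of where this is headed, here is the simple ingress from earlier expressed as a Gateway API HTTPRoute (Gateway name and namespace are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
spec:
  parentRefs:
    - name: traefik       # the shared Gateway this route attaches to
      namespace: infra
  hostnames: [app.example.com]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app
          port: 80
```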
Good news: Gateway API support is already on our roadmap.
If you’re running ingress-nginx, start planning your migration today. March 2026 sounds far away until you factor in testing, rollback plans, and the inevitable edge cases.
We’ve published detailed migration documentation: