That's a cool Docker Compose setup and is definitely competitive with the single-node k8s (k3s) deployment I run for hobby projects. The simplicity of this Docker Compose setup is an advantage, but it is missing some useful features compared to k8s:
No Let's Encrypt support for Nginx via cert-manager.
No GitOps support; the deployment shell script is run manually.
No management UIs like Lens or k9s.
Will not scale to multiple machines without changing the technology to k8s or Nomad.
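For context on the first point, the cert-manager piece is fairly small once the controller is installed. A ClusterIssuer for Let's Encrypt looks roughly like this (the email and ingress class are placeholders, not from the original setup):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # Secret cert-manager creates for the account key
    solvers:
      - http01:
          ingress:
            class: nginx              # must match the ingress controller in use
```

With this in place, annotating an Ingress with `cert-manager.io/cluster-issuer: letsencrypt-prod` is enough to get certificates issued and renewed automatically.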
That said, I would be happy to run their setup if I hadn't gone through the pain of learning k8s.
Our infrastructure? One server. One docker-compose.yml file. Zero Kubernetes complexity.
If your business/use case can tolerate manually rebuilding a server and redeploying your stack in case of hardware failure, I don't see why we're even discussing kubernetes.
We use nginx-ingress-controller alongside another opt-in, paid solution. Very sad to see that this is how orgs are thinking about open source. Thousands of people are profiting from the foundational work of a few.
The retirement of Ingress NGINX was announced for March 2026, after years of public warnings that the project was in dire need of contributors and maintainers. There will be no more releases for bug fixes, security patches, or any updates of any kind after the project is retired.
This cannot be ignored, brushed off, or left until the last minute to address. We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like Gateway API or one of the many third-party Ingress controllers immediately.
I know it's literally the first paragraph, but I thought it worth commenting for those who only read the title and comments.
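For anyone sizing up the migration the announcement recommends, a minimal Gateway API route is a sketch like the following (the Gateway, hostname, and Service names are illustrative and assumed to already exist):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
    - name: example-gateway          # assumed pre-existing Gateway resource
  hostnames:
    - "app.example.com"              # placeholder hostname
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: example-service      # assumed backend Service
          port: 80
```

This replaces the equivalent Ingress object; the controller-specific annotations that Ingress NGINX relied on become fields on the Gateway API resources instead.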
Well, I'll keep tabs on this. We use CNPG, and it will be good to have this in my back pocket. I also liked the steps on recovery, as I'll use those in the future.
I asked how long they need the timeout to be. They requested 1 hour.
That's outright insane. Does this mean that if their connection has any kind of hiccup, all their work is lost?
Instead of having web apps work directly within the request-response cycle, these long-running jobs need to be treated as proper separate tasks, each getting a record entry in the database whose results can be queried later.
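A minimal sketch of that pattern, using only the standard library; the table layout and function names are illustrative, not from any real codebase:

```python
# Sketch: record the job in the database, do the work off the request
# cycle, and let the client poll for the result instead of holding a
# connection open for an hour.
import sqlite3
import threading
import time
import uuid

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, status TEXT, result TEXT)")
lock = threading.Lock()  # sqlite3 connections are not thread-safe by default

def submit(task):
    """Handle the initial request: insert a job row and return immediately."""
    job_id = str(uuid.uuid4())
    with lock:
        db.execute("INSERT INTO jobs VALUES (?, 'pending', NULL)", (job_id,))
        db.commit()
    threading.Thread(target=_run, args=(job_id, task), daemon=True).start()
    return job_id  # the client polls with this id; no long-lived connection

def _run(job_id, task):
    result = task()  # the long-running work happens outside the request cycle
    with lock:
        db.execute("UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
                   (result, job_id))
        db.commit()

def status(job_id):
    """Handle a polling request: just read the job row back."""
    with lock:
        row = db.execute("SELECT status, result FROM jobs WHERE id = ?",
                         (job_id,)).fetchone()
    return {"status": row[0], "result": row[1]}

job = submit(lambda: "report.pdf")  # stand-in for the hour-long job
time.sleep(0.5)                     # give the worker thread time to finish
print(status(job))
```

In production the in-memory SQLite and thread would be a real database plus a task queue (Celery, Sidekiq, a cron worker, etc.), but the shape is the same: the HTTP response carries an id, not the result.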
It depends. If it's an internal facing cluster with little other traffic then it's probably fine. If it's a public facing cluster with NAT then you risk the possibility of exhausting the number of ports for open connections.
If the frontend reliably closes connections when done, then it's probably fine to just set a 1h timeout. If you run into the problem of clients leaving idle connections open, then you may want to consider setting an idle timeout and having the client send keepalive packets to the backend, WebSocket-style.
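As a rough sketch, the two options map to nginx directives like these (the locations, upstream name, and timeout values are illustrative):

```nginx
# Option 1: simply raise the proxy read timeout to cover the longest job.
location /api/ {
    proxy_pass http://backend;
    proxy_read_timeout 3600s;   # allow up to 1h of silence from the backend
    proxy_send_timeout 3600s;
}

# Option 2: WebSocket-style connection with a short idle timeout,
# relying on client keepalive pings to hold the connection open.
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 60s;     # closed if no ping or data arrives within 60s
}
```

Option 2 trades a little client complexity for the ability to reap genuinely dead connections quickly, which matters in the NAT port-exhaustion scenario mentioned above.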
The Rancher or Kubernetes slack servers might be the best place to target your questions. It's more interactive, which would probably be more effective than posting Qs all over the Internet.