Kubernetes

alienscience , in Kubernetes Is Overkill for 99% of Apps (We Run 500k Logs/Day on Docker Compose)

That's a cool docker compose setup and is definitely competitive with a single node k8s deployment I run for hobby projects (k3s). The simplicity of this docker compose setup is an advantage, but it is missing some useful features compared to k8s:

  • No automatic Let's Encrypt certificates for Nginx, as cert-manager provides in k8s.
  • No GitOps support; the deployment shell script is run manually.
  • No management UIs like Lens or k9s.
  • Will not scale to multiple machines without switching the technology to k8s or Nomad.

That said, I would be happy to run their setup if I hadn't gone through the pain of learning k8s.

usernameunnecessary , in Kubernetes Is Overkill for 99% of Apps (We Run 500k Logs/Day on Docker Compose)

Our infrastructure? One server. One docker-compose.yml file. Zero Kubernetes complexity.

If your business/use case can tolerate manually rebuilding a server and redeploying your stack in case of hardware failure, I don't see why we're even discussing Kubernetes.

synae , in Kubernetes Is Overkill for 99% of Apps (We Run 500k Logs/Day on Docker Compose)

Ok? But it's also more fun. Do whatever you want; I'll stick with k8s.

sherlockhomelessok , in Ingress NGINX: Statement from the Kubernetes Steering and Security Response Committees

We use nginx-ingress-controller alongside another opt-in, paid solution. Very sad to see that this is how orgs are thinking about open source. Thousands of people are profiting from the foundational work of a few.

towerful , in Ingress NGINX: Statement from the Kubernetes Steering and Security Response Committees

The retirement of Ingress NGINX was announced for March 2026, after years of public warnings that the project was in dire need of contributors and maintainers. There will be no more releases for bug fixes, security patches, or any updates of any kind after the project is retired.
This cannot be ignored, brushed off, or left until the last minute to address. We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like Gateway API or one of the many third-party Ingress controllers immediately.

I know it's literally the first paragraph, but I thought it worth quoting here for those who only read the title & comments.

higgsboson , in Ingress NGINX: Statement from the Kubernetes Steering and Security Response Committees

ingress-nginx was an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.

onlinepersona , in Kubernetes with 1 million nodes

How can OpenAI have such talented individuals and then mindlessly bombard the internet and ignore robots.txt and every single convention out there?

femtek ,

Great engineers with unscrupulous morals?

onlinepersona ,

That... does explain it quite nicely, indeed.

dinckelman ,

Their entire business model requires them to ignore any moral choices, and exhaust every illegal one they can afford

onlinepersona ,

And yet, somehow the justice system doesn't do anything.

dinckelman ,

Correct, because they pocket lobbyist money, and are complicit

femtek , in How-to: Cloudnative PG serving MongoDB with Automated Recovery from Continuous Backups

Well, I'll keep tabs on this. We use CNPG, and it will be good to have this in my back pocket. I also liked the steps on recovery, as I'll use those in the future.

rglullis , in Why are long ingress timeouts bad?

I asked how long they need the timeout to be. They requested 1 hour.

That's outright insane. Does this mean that if their connection has any kind of hiccup, all their work is lost?

Instead of having web apps work directly out of the request-response cycle, these long-running jobs need to be treated as proper separate tasks: each gets a record entry in their database and can be queried for its result later.
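A minimal sketch of that pattern in Python, with an in-memory dict standing in for the database table; all names and the threading-based worker are illustrative, not from the original comment:

```python
import threading
import time
import uuid

# In-memory stand-in for the task table; a real app would use its database.
tasks = {}
lock = threading.Lock()

def submit_task(payload):
    """Record the job and return immediately with an id the client can poll."""
    task_id = str(uuid.uuid4())
    with lock:
        tasks[task_id] = {"status": "pending", "result": None}
    threading.Thread(target=run_task, args=(task_id, payload), daemon=True).start()
    return task_id

def run_task(task_id, payload):
    """The long-running work happens outside the request-response cycle."""
    with lock:
        tasks[task_id]["status"] = "running"
    time.sleep(0.1)  # placeholder for the hour-long job
    with lock:
        tasks[task_id].update(status="done", result=payload.upper())

def poll_task(task_id):
    """Cheap status lookup; safe behind any ingress timeout."""
    with lock:
        return dict(tasks[task_id])
```

The submit endpoint answers in milliseconds with an id, so no HTTP connection ever has to stay open for the duration of the job; the client (or a results page) just polls until the record flips to done.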

lemmyng , in Why are long ingress timeouts bad?

It depends. If it's an internal facing cluster with little other traffic then it's probably fine. If it's a public facing cluster with NAT then you risk the possibility of exhausting the number of ports for open connections.

If the frontend reliably closes connections when done, then it's probably fine to just set a 1h timeout. If you run into the problem of clients leaving idle connections open then you may want to consider setting an idle timeout, and then have the client send keepalive packets to the backend, websocket style.
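A rough sketch of that keepalive idea in Python, using a local socket pair to stand in for the client-to-backend connection; the function name, frame format, and interval are illustrative assumptions:

```python
import socket
import threading
import time

def start_keepalive(sock, interval=0.05):
    """Send a tiny ping frame periodically so an idle timeout never fires,
    websocket-style. Returns an Event; set it to stop the loop."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            sock.sendall(b"ping\n")
            stop.wait(interval)  # sleep, but wake early if stopped

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Demo: a socket pair stands in for client <-> backend.
client, backend = socket.socketpair()
stop = start_keepalive(client)
time.sleep(0.2)               # connection sits "idle" but pings keep flowing
stop.set()
client.sendall(b"done\n")     # the real payload still gets through
received = backend.recv(4096)
```

With an idle timeout on the ingress plus pings like these, the timeout only ever kills connections that are genuinely dead, rather than ones that are slow but alive.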

evujumenuk , in Why are long ingress timeouts bad?

There are a lot of smaller and bigger potential problems with that, but I think there's only one way to find out whether they become actual problems.

Try it, and report back.

NerdsGonnaNerd , in Helm Diff - A Helm plugin that gives a preview of resource changes

Nice project! Can I use it if I deployed everything with fluxcd as well?

ExperimentalGuy , in Good alternatives to Medium/substack?

There's bearblog. You can host an instance or post on the main one.

Owljfien , in KubeVirt user interface options | KubeVirt.io

Useful info for me, as I use Cockpit and am considering dabbling in kube.

jbloggs777 , (edited ) in How to see what is using flannel or circumvent flannel address usage in kubernetes?

The Rancher or Kubernetes Slack workspaces might be the best place to take your questions. They're more interactive, which would probably be more effective than posting Qs all over the Internet.

SpiderUnderUrBed OP ,

I'll go there. I don't constantly post questions, but I've just recently been having a lot of issues with CNIs. I might just delete this post.