Who tf actually uses Docker Swarm anyway?


Yeah but we’re talking diminishing returns here. Doubling the resolution to 8K makes about as much sense as doubling refresh rates to 480 Hz. At that point it’s going to be mostly dependent on the individual, and likely heavily subject to the placebo effect.
By my math, a 55" 8K screen has pixels that are about 0.0062" (6.2 thou) wide.
At ten feet, each one subtends an angle of about 0.003 degrees, or 0.18 arcminutes.
There’s obviously a lot of variation and it depends on exactly what you’re measuring, but normal human visual acuity can’t distinguish details much finer than about 1 arcminute (that’s basically the definition of 20/20 vision; a whole Snellen chart letter only subtends 5 arcminutes). So those pixels are several times smaller than the eye can resolve.
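The arithmetic fits in a few lines (assuming a 16:9 panel, 8K UHD at 7680×4320, and a 10-foot viewing distance):

```python
import math

# 55" 16:9 panel at 8K UHD (7680 x 4320), viewed from 10 feet.
diag_in = 55
width_in = diag_in * 16 / math.hypot(16, 9)   # panel width, ~47.9"
pitch_in = width_in / 7680                    # pixel pitch, ~0.0062"
distance_in = 10 * 12                         # viewing distance, 120"

# Angle a single pixel subtends at the eye.
angle_deg = math.degrees(math.atan2(pitch_in, distance_in))   # ~0.003 degrees
angle_arcmin = angle_deg * 60                                 # ~0.18 arcminutes
print(f"{pitch_in:.4f} in/pixel, {angle_arcmin:.2f} arcminutes")
```

That lands well under the ~1 arcminute resolving limit of 20/20 vision.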


It’s because we’re at the limits of the human visual system. The difference in pixel pitch between 4K and 8K at the distances we watch TV is literally imperceptible.
It also doesn’t help that there’s not much content authored and distributed for higher resolutions. It’s vastly more expensive to produce, store, and deliver.
Home Internet connections on average aren’t any better than they were ten years ago, either, at least not in the US. I doubt a lot of them can even support 8K streaming, let alone with anyone else using it at the same time.
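A quick back-of-envelope shows why (the bits-per-pixel figures here are rough compression ballparks I’m assuming, not codec specs):

```python
# What an 8K/60 stream asks of a home connection.
width, height, fps = 7680, 4320, 60
bits_per_pixel_raw = 30   # 10-bit color x 3 channels

# Uncompressed, the pixel firehose is enormous.
raw_gbps = width * height * fps * bits_per_pixel_raw / 1e9   # ~60 Gbps

# Even assuming a modern codec gets that down to roughly
# 0.02-0.05 bits per pixel, the stream still needs:
for bpp in (0.02, 0.05):
    mbps = width * height * fps * bpp / 1e6
    print(f"{bpp} bits/pixel -> ~{mbps:.0f} Mbps")   # ~40 to ~100 Mbps
```

Tens of Mbps sustained, per stream, is a big ask for a lot of US households.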


It’s possible to design fluid couplings that multiply torque. That’s what a torque converter in an automatic transmission is: a fluid coupling with a stator added to redirect the returning fluid.
It’s not quite as elegant, of course, but still.
Oh yeah, with thrilling autobiographies such as:
For fun I like to: Walk my dog in the park!
And
2 Truths and a Lie:
- I spent the night in jail
- I’ve never broken a bone
- I once met <B-list celebrity>
Ahh, Hinge: the app for people who are too sophisticated for Tinder, but still superficial enough that their profiles are 90% photos.




I fucking hate when people ask an LLM “what were you thinking”, because the answer is meaningless, and it just showcases how little people understand about how these models actually work.
Any activity inside the model that could be considered any remote approximation of “thought” is completely lost as soon as it outputs a token. The only memory it has is the context window, the past history of inputs and outputs.
When you ask it that, all it does is look back over the past output and attempt to rationalize what it previously wrote.
And actually, even that is excessively anthropomorphizing the model. In reality, it’s just generating a plausible response to the question “what were you thinking”, given the history of the conversation.
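You can see the statelessness in a toy sketch. `generate_next_token` below is a stand-in I made up, not any real model API, but the shape is the point: nothing persists between calls except the context you pass back in.

```python
import random

def generate_next_token(context: list[str]) -> str:
    # Stand-in for a real model: the next token is a pure function of the
    # context. No hidden state survives this call.
    random.seed(" ".join(context))
    return random.choice(["yes", "no", "maybe", "<eos>"])

def respond(context: list[str], max_len: int = 20) -> list[str]:
    # The growing context window IS the model's entire memory.
    context = list(context)
    while len(context) < max_len:
        tok = generate_next_token(context)
        if tok == "<eos>":
            break
        context.append(tok)
    return context

# Ask "what were you thinking": the reply is generated fresh from the
# conversation text alone; whatever happened inside the model on earlier
# turns is gone.
history = ["user:", "what", "were", "you", "thinking?"]
print(respond(history))
```

Same context in, same tokens out; there is nothing else for the model to “remember.”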
I fucking hate this version of “AI”. I hate how it’s advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It’s absolute insanity.
That’s funny, I didn’t notice that lol
Animals never had a war
Guess they call what happened in Tanzania in 1974 a “chimp police action” instead.




I’ve long maintained that actually writing code is only a small part of the job. Understanding the code that exists and knowing what code to write is 90% of it.
I don’t personally feel that gen AI has a place in my work, because I think about the code as I’m writing it. By the time I have a complete enough understanding of what I want the code to do in order to write it into a prompt, the work is already mostly done, and banging out the code that remains and seeing it come to life is just pure catharsis.
The idea of having to hand-hold an LLM through figuring out the solution itself just doesn’t sound fun to me. If I had to do that, I’d rather be teaching an actual human to do it.
“AI bubble causing economic collapse” here.


Great source of protein crawling there across the ground, it’d be a shame if someone wasn’t there to eat it
But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.
There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
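For that kind of busywork, a dozen lines of stdlib fuzzing cover more ground than 100 hand-written (or LLM-written) cases. `parse_port` here is a made-up function under test, not any real API:

```python
import random

def parse_port(s: str) -> int:
    # Made-up function under test: parse a TCP port number.
    n = int(s)
    if not 0 <= n <= 65535:
        raise ValueError("port out of range")
    return n

# Generate random inputs and check one property, instead of enumerating
# a hundred near-identical cases by hand.
random.seed(0)
for _ in range(1000):
    n = random.randint(-100_000, 200_000)
    try:
        assert parse_port(str(n)) == n
        assert 0 <= n <= 65535       # parsing succeeded only for valid ports
    except ValueError:
        assert not 0 <= n <= 65535   # parsing failed only for invalid ports
```

One property, a thousand inputs, and the cases write themselves.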
I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.
90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.
Glad to see there’s other programmers out there who actually take pride in their work.


As an American who hates AI, fucking send it, EU.


All that flowery bullshit, and I still have no idea if the game is any good or not. Sure, it didn’t sound fun, but lots of good games sound really boring if you describe them the wrong way.
Games journalism at its best, clearly.
There’s no way in hell Paramount has that much cash available. This is gonna be a leveraged buyout of epic proportions, and it’s gonna crash and burn epically too.