• 6 Posts
  • 692 Comments
Joined 2 years ago
Cake day: September 11th, 2023



  • Yeah, but we’re talking diminishing returns here. Doubling the resolution to 8k makes about as much sense as doubling refresh rates to 480 Hz. At that point it’s going to be mostly dependent on the individual, and likely heavily subject to the placebo effect.

    By my math, a 55" 8k screen (about 47.9" wide at 16:9) has pixels that are roughly 0.006" (6 thou) wide.

    At ten feet, one pixel subtends an angle of about 0.003 degrees, or 0.18 arcminutes.

    There’s obviously a lot of variation and it depends on exactly what you’re measuring, but normal 20/20 visual acuity bottoms out around 1 arcminute — a standard Snellen eye-chart letter spans 5 arcminutes, with 1-arcminute strokes. Either way, a single 8k pixel at that distance is comfortably below the threshold.
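    The arithmetic above can be sketched in a few lines (assuming a 16:9 panel and the ten-foot distance from the comment; these are back-of-the-envelope numbers, not display-industry specs):

```python
# Angular size of one pixel on a 55" 8k (7680-wide) panel viewed from 10 ft.
# All inputs are assumptions taken from the comment above.
import math

diagonal_in = 55.0
aspect_w, aspect_h = 16, 9
h_pixels = 7680          # 8k UHD horizontal resolution
distance_in = 120.0      # 10 feet

# Panel width from the diagonal and aspect ratio.
width_in = diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)
pitch_in = width_in / h_pixels

# Angle subtended by one pixel, converted to arcminutes.
angle_deg = math.degrees(math.atan(pitch_in / distance_in))
angle_arcmin = angle_deg * 60

print(f"width {width_in:.1f} in, pitch {pitch_in:.4f} in, {angle_arcmin:.2f} arcmin")
```

    That lands around 0.18 arcminutes per pixel — well under the ~1 arcminute usually quoted for 20/20 acuity.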


  • It’s because we’re at the limits of the human visual system. The difference in pixel pitch between 4k and 8k at the distances we watch TV is literally imperceptible.

    It also doesn’t help that there’s not much content authored and distributed for higher resolutions. It’s dramatically more expensive to produce, store, and deliver — 8k has four times the pixels of 4k.

    Home Internet connections on average aren’t any better than they were ten years ago, either, at least not in the US. I doubt a lot of them can even support 8k streaming, let alone with anyone else using it at the same time.
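    For a sense of scale, here’s a rough bitrate estimate. Every number in it is an assumption — a typical 4k stream around 20 Mbps, and a guess that bitrate grows sublinearly with pixel count because codecs compress larger frames more efficiently:

```python
# Back-of-the-envelope 8k streaming bitrate (all numbers are assumptions,
# not published specs from any streaming service).
pixels_4k = 3840 * 2160
pixels_8k = 7680 * 4320
bitrate_4k_mbps = 20.0    # assumed typical 4k stream
codec_discount = 0.6      # assumed: bitrate scales sublinearly with pixels

bitrate_8k_mbps = bitrate_4k_mbps * (pixels_8k / pixels_4k) * codec_discount
print(f"estimated 8k stream: ~{bitrate_8k_mbps:.0f} Mbps")
```

    Even with a generous codec discount that’s roughly 50 Mbps sustained for one stream, which plenty of US home connections can’t spare.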






  • I fucking hate when people ask an LLM “what were you thinking” because the answer is meaningless, and it just showcases how little people understand of how they actually work.

    Any activity inside the model that could be considered any remote approximation of “thought” is completely lost as soon as it outputs a token. The only memory it has is the context window, the past history of inputs and outputs.

    All it does when you ask it that is look over the past output and attempt to rationalize what it previously produced.

    And actually, even that is excessively anthropomorphizing the model. In reality, it’s just generating a plausible response to the question “what were you thinking”, given the history of the conversation.
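    That statelessness can be sketched in a few lines. This is a toy illustration, not any real vendor’s API — complete_fn and toy_model are made-up stand-ins:

```python
# Sketch of why "what were you thinking" can't work: the only state a chat
# model sees is the message list you resend on every call. complete_fn is a
# hypothetical stand-in, not a real API client.
def chat_turn(history, user_message, complete_fn):
    history = history + [{"role": "user", "content": user_message}]
    # Everything the model "knows" about the conversation is this list;
    # whatever internal activations produced earlier replies are gone.
    reply = complete_fn(history)
    return history + [{"role": "assistant", "content": reply}]

# A toy stand-in "model" that only ever sees the transcript:
def toy_model(history):
    return f"(a plausible continuation of {len(history)} messages)"

h = chat_turn([], "What were you thinking?", toy_model)
print(h[-1]["content"])
```

    Ask it to explain a past answer and all it can do is generate new text conditioned on the transcript — there is no stored thought process to report.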

    I fucking hate this version of “AI”. I hate how it’s advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It’s absolute insanity.





  • I’ve long maintained that actually writing code is only a small part of the job. Understanding the code that exists and knowing what code to write is 90% of it.

    I don’t personally feel that gen AI has a place in my work, because I think about the code as I’m writing it. By the time I understand what I want the code to do well enough to write it into a prompt, the work is already mostly done, and banging out the code that remains and seeing it come to life is just pure catharsis.

    The idea of having to hand-hold an LLM through figuring out the solution itself just doesn’t sound fun to me. If I had to do that, I’d rather be teaching an actual human to do it.




  • But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

    There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
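    That kind of mechanical variation is often just a few lines of stdlib anyway. A hedged sketch — the endpoint dimensions here are invented for illustration:

```python
# Generating the "100 tiny variations" style of test case with the stdlib
# instead of an LLM. The dimensions (method, auth, payload size) are made up.
from itertools import product

methods = ["GET", "POST"]
auth_states = [True, False]
payload_sizes = [0, 1, 1024]

cases = [
    {"method": m, "authed": a, "payload_bytes": s}
    for m, a, s in product(methods, auth_states, payload_sizes)
]
print(len(cases))  # 2 * 2 * 3 = 12 variations from three short lists
```

    For deeper coverage, a property-based tool like Hypothesis or an actual fuzzer will explore the input space far better than hand-writing (or LLM-generating) each case.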


  • I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.

    90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.

    Glad to see there’s other programmers out there who actually take pride in their work.