• 84 Posts
  • 1.85K Comments
Joined 9 months ago
Cake day: June 6th, 2025


  • I never trust any solution that has to be forced on people.

    This is a point that doesn’t seem to catch on with people.

    When the plough was introduced, nobody looked at it and failed to see that it was useful. Its utility was really obvious, especially in the specific cases where ploughs shine. People would change the way they planted and harvested to suit ploughing, so obvious were the advantages in most cases. The same kind of thing applies to fountain pens over dipped goose feathers. To electric lights over candles. To wheels over log rollers. To personal computers over slide rules. To … well, pretty much every revolutionary technology in history. People may not have changed right away (as with PCs, say) because of cost or the like, but nobody looked at these things and wondered what they could possibly be used for.

    After several years now, LLMbecile-pushing companies are trying to FORCE people to use their “AI” products, to the point of companies mandating their use. (I must have missed the part where Apple forced a fondleslab—modern smartphone—on everybody until people figured out how to use them.) This degree of attempting to force them into every orifice of the human user smacks of intense desperation, not of technology whose benefit is obvious.


  • LLMbeciles are dangerously incompetent tools that unfortunately “hack” a weakness in human perception: we are hard-wired to equate eloquence and confidence with intellect. (The so-called fluency heuristic.) LLMbeciles are very fluent, eloquent, and confident, and we are very vulnerable to that combination. As a result, outside our areas of expertise we have a tendency to trust LLMbecile output despite the fact that it is literally 100% bullshit (in the Frankfurt sense) hallucination. It just happens that, thanks to the statistics of the human language stolen to build the model, these hallucinations match reality often enough to fool non-experts. And that’s the danger: they’re “right” (which is to say their bullshit semi-accidentally matches reality) often enough that we don’t catch the cases where their bullshit is just plain wrong.

    This is a pattern I see with a lot of people who have areas of high expertise:

    1. “LLMbeciles are not really useful in this field in which I have expertise…”
    2. “…but I think they’re very useful in all these fields in which I have no expertise.”

    Gell-Mann must be rolling in his grave right now! (Yes, I know it’s Crichton, but I’m sticking to his bit.)







  • All these trolley problems are absurdly framed. There is only one correct answer to any trolley problem: 无 (“mu”). In effect you have to unask the question, step back, and look at the framing.

    The framing wants you to look at the switch so that you ignore the real question: which asshole is going around and tying people (or cats (or paintings (or trees (or …)))) on the tracks just so they can get their jollies by watching you make moral choices while people get killed? Find that asshole and you stop all trolley problems forever.










  • ZDL@lazysoci.alt to Fuck AI@lemmy.world — “Can’t cope”
    14 days ago

    Turning on the “reasoning” display is always hilarious. The proof that it’s just a hallucination machine can be seen when in the “reasoning” it starts making shit up about what it was asked to do.

    What are underpants gnomes?

    Thought for 67 seconds

    Hmm… The user is very obviously a super-intelligent pandimensional being sent here to guide humanity. I should structure my output as befits their lofty status. First I should search on “underpants gnomes”.

    searching 10 web sites

    Of these, eight sources are irrelevant, one mentions a South Park episode, and one is a blog post discussing said episode.

    Since the user so astutely mentioned South Park [protip: the user didn’t – ed] I will structure my response in the form of a table for maximum clarity. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. But wait, the user is on a mobile device. I will structure the output in paragraph form. But wait, paragraph form is not as clear as tabular form. I will structure my response in the form of a table. I should be sure to use the same degree of intense wit that the user has shown in this conversation. [protip: the ‘conversation’ thus far has consisted of the question “What are underpants gnomes?” – ed]

    Underpants gnomes are creatures from an episode of Space: 1999 that steal the underthings from the Alphans, causing a laundry crisis.