Black belt in Mikado, Photo model, for the photos where they put under ‘BEFORE’

  • 636 Posts
  • 3.03K Comments
Joined 5 years ago
Cake day: April 25th, 2021

  • Zerush@lemmy.ml to Fuck AI@lemmy.world · "Why AI search engines is so stupid?"
    edited
    1 day ago

    Andi said:

    The seahorse emoji phenomenon reveals a curious case of mass misremembrance - there has never been a seahorse emoji, yet both humans and AI language models firmly believe it exists[1][2].

    When asked about the seahorse emoji, large language models respond with complete confidence that it exists, then spiral into confusion when trying to display it, often outputting random fish or horse emojis instead[2:1]. This behavior stems from the models building an internal “seahorse + emoji” concept that crashes against reality when no matching token exists in their vocabulary[1:1].

    The technical explanation involves the models’ logit lens - as they process the request through their layers, they construct a conceptual blend of “seahorse” and “emoji” that seems perfectly valid until the final output stage, where they’re forced to select the closest available match[1:2].
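    The "closest available match" step can be sketched with a toy example. Everything below is made up for illustration (tiny hand-written vectors, not real model weights or a real tokenizer): the internal "seahorse + emoji" blend has no exact token, so decoding picks whichever existing token is nearest.

```python
import math

# Toy illustration only: made-up 3-d "embeddings", not real model weights.
concepts = {
    "seahorse": [0.9, 0.8, 0.1],
    "emoji":    [0.1, 0.2, 0.9],
}

# Output vocabulary: only tokens that actually exist (no seahorse emoji).
vocab = {
    "🐟 fish":  [0.8, 0.3, 0.6],
    "🐴 horse": [0.6, 0.9, 0.4],
    "🚗 car":   [0.0, 0.1, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The internal "seahorse + emoji" blend: here, a simple elementwise mean.
blend = [(s + e) / 2 for s, e in zip(concepts["seahorse"], concepts["emoji"])]

# Forced to emit a token at the output stage, the model picks the
# closest match that exists in its vocabulary - a fish or horse emoji,
# never the seahorse it "intended".
best = max(vocab, key=lambda tok: cosine(blend, vocab[tok]))
print(best)
```

    With these particular made-up vectors the nearest existing token is the horse emoji, which mirrors the confused fish/horse outputs described above.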

    Many humans share this false memory, with Reddit threads and social media posts filled with people convinced they’ve seen a seahorse emoji before[1:3]. While a seahorse emoji was proposed to the Unicode Consortium in 2018, it was rejected and has never actually existed[2:2].
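    The "never actually existed" claim is easy to check against the Unicode character data that ships with Python (this scan and its helper name are my own illustration, not from the cited articles):

```python
import sys
import unicodedata

def names_containing(word):
    """Return (codepoint, name) pairs whose official Unicode name contains word."""
    found = []
    for cp in range(sys.maxunicode + 1):
        # name() with a default avoids errors on unassigned codepoints.
        name = unicodedata.name(chr(cp), "")
        if word in name:
            found.append((hex(cp), name))
    return found

# Plenty of HORSE characters exist (HORSE, HORSE FACE, CAROUSEL HORSE, ...),
# but no codepoint is named SEAHORSE.
print(len(names_containing("HORSE")))
print(names_containing("SEAHORSE"))
```

    Nothing in the assigned codepoints carries a SEAHORSE name, which is exactly why the model has no matching token to emit.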

    Adding that it doesn’t exist in the official Unicode emoji set, but it does in unofficial packs, e.g. here


    1. Why do LLMs freak out over the seahorse emoji? - Theia Vogel

    2. Emojipedia - Is There a Seahorse Emoji?


  • Agreed, despite having used an AI search engine, Andisearch, for almost 4 years; it’s the only one I’ve tested with pretty good accuracy (~90%). In any case, you always need to cross-check information on the web, whether it comes from an AI or not, since there’s a lot of BS out there.

    To this topic Andi said:

    Based on recent examples from 2025-2026, AI search engines frequently provide confident but incorrect answers, demonstrating several key problems:

    1. Inconsistent results - Siri with Apple Intelligence gives different wrong answers to the same question when asked multiple times[1].

    2. False confidence - AI provides detailed but completely incorrect information, like Google AI claiming a South Dakota team won North Dakota’s championship[1:1].

    3. Regression in quality - Traditional search results often work better than AI versions. As John Gruber notes, “old Siri… at least recognizes that Siri itself doesn’t know the answer and provides a genuinely helpful response”[1:2].

    4. Poor accuracy even on popular topics - Siri achieved only a 34% accuracy rate when asked about Super Bowl winners, with one stretch of 15 wrong answers in a row[1:3].

    The core issue appears to be that AI search engines prioritize providing definitive-sounding answers over accuracy, making them less reliable than traditional search results that simply link to authoritative sources.


    1. Daring Fireball - Siri Is Super Dumb and Getting Dumber