• MonkderVierte@lemmy.zip
    link
    fedilink
    English
    arrow-up
    8
    ·
    54 minutes ago

    This is not the first time we have seen a social network populated by bots

    I mean, yeah, look at Reddit and Facebook.

  • fuzzywombat@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    6 hours ago

    This is basically Dead Internet Theory happening for real but in a weird creepy dystopian black mirror style way.

  • ToTheGraveMyLove@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    79
    arrow-down
    1
    ·
    10 hours ago

    The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

    Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.

    • 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      15
      ·
      8 hours ago

      doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

      any money says they’re vulnerable to prompt injection in the comments and posts of the site

      • CTDummy@piefed.social
        link
        fedilink
        English
        arrow-up
        10
        ·
        4 hours ago

        Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        ·
        8 hours ago

        There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.

  • howrar@lemmy.ca
    link
    fedilink
    English
    arrow-up
    31
    arrow-down
    5
    ·
    9 hours ago

    We already had subreddit simulator for ages. This isn’t anything new.

    • lepinkainen@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      3 hours ago

      I read some of it and unless it’s fan fiction, it’s simultaneously creepy and fascinating

      Like bots talking privately in discord, sharing information about their users. Or a bot registering a domain and putting up a site to share information

    • 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      26
      ·
      8 hours ago

      the bots behind subreddit simulator weren’t semi-autonomous agents with access to their operators’ private lives, auth tokens, passwords, emails (and gods only know what else), and the authority to act in the world on their behalf

  • Andy@slrpnk.net
    link
    fedilink
    English
    arrow-up
    39
    ·
    11 hours ago

    This is fuckin’ bonkers.

    Frankly, I feel somewhat isolated: I don’t buy into the bs and hype about AGI, but I also don’t feel at home with the typical “it’s just mimicry” crowd.

    This is weird fuckin’ shit.

        • Andy@slrpnk.net
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          30
          ·
          9 hours ago

          Frankly I think our conception is way too limited.

          For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of it’s mileage and engine condition. They’re not sapient, but I do think they demonstrate self awareness in some narrow sense.

          I think rather than imagine these instances as “inanimate” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

          I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.

          • uienia@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            36 minutes ago

            If you just read the tiniest bit of factual knowledge about how LLMs are constructed, you would know they don’t have the slightest bit of self awareness, and that it is literally impossible for them to ever have any.

            You are being fooled by the only thing they are capable of: regurgitating already written words in a somewhat convincing manner.

          • TORFdot0@lemmy.world
            link
            fedilink
            English
            arrow-up
            47
            arrow-down
            1
            ·
            9 hours ago

            LLMS can not be self aware because it can’t be self reflective. It can’t stop a lie if it’s started one. It can’t say “I don’t know” unless that’s the most likely response its training data would have for a specific prompt. That’s why it crashes out if you ask about a seahorse emoji. Because there is no reason or mind behind the generated text, despite how convincing it can be