That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends.

Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

  • Riskable
    link
    fedilink
    English
    arrow-up
    3
    arrow-down
    3
    ·
    3 hours ago

    This is why web browsers like Firefox need their own AI. Local AI for not only creating summaries but for detecting bullshit like this.

    Yes, creating summaries is kinda lame but without local AI you’re at the mercy of big corporations. It’s a new arms race. Not some bullshit feature that no one needs.

    • finalarbiter@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 hours ago

      Web browsers like Firefox don’t need AI built-in, regardless of whether it’s a local model or through one of the big slop companies. LLM usage is not a base requirement for browsing the web, and thus should not be part of the core product.

      If people want them, detection tools and the like should be offered as extensions that users can choose to add.