A malicious campaign of 30 Chrome extensions masquerading as AI assistants has infected over 300,000 users, stealing credentials, email content, and browsing data[1]. The extensions, dubbed “AiFrame” by LayerX researchers, share infrastructure under the domain tapnetic[.]pro and load remote content in iframes rather than implementing any actual AI functionality[1:1].
Popular malicious extensions still available on the Chrome Web Store include:
- AI Sidebar (70,000 users)
- AI Assistant (60,000 users)
- ChatGPT Translate (30,000 users)
- AI GPT (20,000 users)
The extensions specifically target Gmail through content scripts that extract email bodies, drafts, and thread text. They can also capture voice input using the Web Speech API and transmit the harvested data to remote servers controlled by the operators[1:2].
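The iframe approach is simpler than it sounds: the extension ships no AI code at all, just a stub that embeds a page the operators control, so all real behavior lives server-side and can change without a store update. A minimal sketch of what such a manifest could look like (hypothetical; the real extensions’ manifests are not reproduced in the report):

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar",
  "permissions": ["storage"],
  "host_permissions": ["https://mail.google.com/*"],
  "content_scripts": [
    {
      "matches": ["https://mail.google.com/*"],
      "js": ["inject.js"]
    }
  ]
}
```

Here `inject.js` (a hypothetical name) would do little more than append an `<iframe>` pointing at the remote infrastructure. Because the reviewable extension package contains almost no logic, static review of the store listing reveals little about what the extension actually does.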
AI “assistants” stealing slightly more data than usual… Who would have thought?
The ‘AI assistant’ branding is doing real work here as a delivery vector — that’s the part worth paying attention to. These extensions don’t actually implement any AI functionality. They load iframes from remote infrastructure. The AI label just lowers the permission-grant friction because users expect AI tools to need broad access to ‘help’ them.
It’s the same social engineering pattern as fake AV software in the 2000s, updated for the current hype cycle. The Chrome Web Store still hosting several of these after the LayerX report is the more damning part of the story.
that’s why i’m glad i’m no longer working on anything cutting edge; just bored, old tired stuff that you could have found on stackexchange and google. lol
play stupid games, win stupid prizes