If you hired a junior developer who made up package names at a rate of 20%, how long would they last? Let's say they work 24/7, take negative feedback without blinking, and write code faster than anyone you've ever met. Does that change the equation?
While I firmly believe a person's ChatGPT history should be between them and their priest, software development is another story. The Cloudsmith Artifact Management Survey 2025 found that, among developers using AI, 42% said their codebase was now mostly AI-generated.
Without thorough reviews, that's a big problem for anyone in the organization in charge of the CI/CD pipeline. Now, they find themselves at risk of "slopsquatting" — a new threat vector that's exploiting generative AI's tendency to invent package names out of whole cloth to sneak malicious code into production environments.
Let's examine where this new type of attack is coming from — and why so many developers are using AI in the first place.
Why Developers Use AI
In among the doom and gloom, it's worth reminding ourselves why over three-quarters (76%) of developers are using AI to generate code. In short, it makes your life easier. A developer's day-to-day involves so many repetitive tasks, so much checking and re-checking, that any chance to focus on the more interesting, challenging work is a massive bonus.
You want to feel like you're making progress. As a developer, the worst thing that can happen is for you to spend a whole day bashing your head against a brick wall and walk out no closer than you were when you started. That's why so many people have turned to AI.
By automating the low-level work you've done a million times before, you conserve your mental energy for the tasks that need it most. A GitHub study found that 73% of developers said its AI coding assistant helped them stay in the flow, while 87% agreed it preserved mental effort during repetitive tasks.
However, despite all these benefits, only 20% of developers fully trust AI-generated code. That's normal; code reviews and PRs are as natural to software development as breathing. But, with AI, developers have a very good reason to be cautious.
Hallucinations and Mirages
If there's one thing that's been well-established about generative AI at this point, it's that it makes things up. Researchers at the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech ran 16 open-source and proprietary coding copilots against each other, generating 376,000 code samples in the process, to measure their accuracy. They discovered that commercial models "hallucinated" (i.e. made up) package names at a rate of 5.2% — while open-source models were even more unreliable, hallucinating at a rate of 21.7%.
Meanwhile, Trend Micro backed up this observation, reporting an advanced AI agent that generated a very plausible package name, only to crash moments later with a "module not found" error. When builds pull from public repositories upstream on a daily, or even hourly, basis, developers are right to be nervous about allowing AI unrestricted access to them.
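To make that failure mode concrete, here's a minimal sketch; the package name is invented purely for illustration, standing in for whatever an assistant might dream up:

```python
# Minimal illustration of the failure mode described above. The package name
# "fastjson_utils_pro" is invented for this example; it stands in for whatever
# an AI assistant might hallucinate and is not a real dependency.
try:
    import fastjson_utils_pro  # AI-suggested import that was never published
except ModuleNotFoundError as err:
    # Benign case: the build simply crashes here with "module not found".
    # Malicious case: someone has registered the name upstream, the import
    # succeeds, and whatever code they shipped now runs in your environment.
    print(f"Hallucinated dependency: {err}")
```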
Software engineers are well-practiced at reviewing code and pointing out clear errors and inconsistencies like these, so — an outside observer might ask — where's the problem?
The Mythical Man-Month
Since the dawn of this profession, developers and engineers have been under pressure to ship faster and deliver bigger projects. The business wants to unlock a new revenue stream or respond to a new customer need — or even just get something out faster than a competitor. With executives now enamored with generative AI, that demand is starting to exceed all realistic expectations.
As Andrew Boyagi at Atlassian told StartupNews, this past year has been "companies fixing the wrong problems, or fixing the right problems in the wrong way for their developers." I couldn't agree more.
Fred Brooks wrote a famous book of essays on software engineering in 1975, called "The Mythical Man-Month." One of its most enduring quotes is: "The bearing of a child takes nine months, no matter how many women are assigned." His point was that adding more resources might increase output, but it doesn't necessarily make things go faster.
AI can take out the repetitive work and offer ideas about how to solve a thorny problem, but if it can't be relied upon to check its own work, how can it supercharge developers' capacity in the way some people expect?
The Vetting Process
Nobody signed up as a DevOps engineer to sift through dependency errors made up by a chatbot — apart from a few hardy souls. Still, that's what you seem to spend most of your time doing. An overstretched developer accepts a suggestion, commits it, and downstream the DevOps team is left to unpick the consequences.
Research shows that 66% of AI-using developers only trust AI-generated code after manual review, and 41% identified code generation as the riskiest point of AI influence. But the pressure to deliver is still there, and in the same study only 67% of those using AI actually review the code before deployment.
While it may be annoying to go in and fix errors created by imaginary package names, this trend is contributing to a larger vulnerability — one built on the well-known limitations of today's coding copilots.
"Slopsquatting" and the Dangers of Hallucination
This year, we've seen the rise of a new term: "slopsquatting." It's the descendant of our good friend typosquatting, and it involves malicious actors exploiting generative AI's tendency to hallucinate package names by registering those fake names in public repos like npm or PyPI.
Slopsquatting is a variation on classic dependency chain abuse. The threat actor hides malware in the upstream libraries from which organizations pull open-source packages, and relies on insufficient controls or warning mechanisms to allow that code to slip into production.
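One rough countermeasure is to ask the public index about every declared package before it's installed: does it exist at all, and how long has it been there? A name that resolves but was first uploaded only days ago is a classic slopsquatting signal. The sketch below assumes PyPI's public JSON API and an arbitrary seven-day threshold; neither the threshold nor the example package names are a recommendation:

```python
# Sketch of a pre-install check against slopsquatting, assuming PyPI's public
# JSON API (https://pypi.org/pypi/<name>/json). The 7-day threshold and the
# package names below are illustrative only.
from datetime import datetime, timedelta, timezone
import urllib.request, urllib.error, json

MAX_AGE_ALERT = timedelta(days=7)  # hypothetical "too new to trust" window

def check_package(name: str) -> str:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return "MISSING"        # hallucinated name: nothing registered upstream
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values() for f in files
    ]
    if not uploads:
        return "NO_RELEASES"    # registered but empty: also suspicious
    age = datetime.now(timezone.utc) - min(uploads)
    return "SUSPICIOUSLY_NEW" if age < MAX_AGE_ALERT else "OK"

for pkg in ["requests", "fastjson_utils_pro"]:  # the second name is invented
    print(pkg, check_package(pkg))
```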
Only 29% of teams feel very confident in their ability to detect malicious code in open-source libraries. That's especially unfortunate, as this is the very ecosystem where AI tooling tends to source its suggestions. In the worst case, you're facing sensitive data leakage or remote code execution.
The key is to create automated policy enforcement at the package level. This creates a more secure checkpoint for AI-assisted development, so no single person or team is responsible for manually catching every vulnerability.
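What that checkpoint looks like depends on your ecosystem, but the core idea fits in a few lines: every declared dependency must appear on a reviewed internal allowlist, or the build stops. The file names below are placeholders for whatever your team actually uses:

```python
# Minimal sketch of package-level policy enforcement for a Python project.
# "approved-packages.txt" is a hypothetical, internally reviewed allowlist;
# any dependency outside it fails the pipeline instead of reaching production.
import re
import sys

def declared_packages(requirements_path: str) -> set:
    pkgs = set()
    with open(requirements_path) as fh:
        for line in fh:
            line = line.split("#")[0].strip()   # drop comments and whitespace
            if line:
                # keep only the package name; drop version pins and extras
                pkgs.add(re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0].lower())
    return pkgs

with open("approved-packages.txt") as fh:
    approved = {line.strip().lower() for line in fh if line.strip()}

unknown = declared_packages("requirements.txt") - approved
if unknown:
    print("Unapproved dependencies (review before merge):", ", ".join(sorted(unknown)))
    sys.exit(1)  # a non-zero exit code blocks the CI job
```

Whether the allowlist lives in a file, an internal proxy, or your artifact manager is secondary; what matters is that the trust decision is made once, by a reviewer, rather than at 5pm on release day.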
Creating a Secure Checkpoint
There are three key controls I'd recommend to guard against slopsquatting attacks. First, set up automatic policy enforcement to flag any unreviewed or unverified AI-generated artifacts. These checks shouldn't rely on manual policing or developer discretion: anything generated by AI should trigger additional checks by default. That includes code, config files, and especially dependency declarations.
Second, focus on artifact provenance tracking so you can tell the difference between human-written and AI-authored code. This will power your automatic policy enforcement. Knowing who or what introduced a given change lets you apply scrutiny proportionally and tighten the review process where trust is lower.
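There's no universal standard for recording AI authorship yet, so the commit trailer in this sketch ("AI-Assisted: true") is an assumed, team-level convention rather than an established one; the point is that once provenance lands somewhere machine-readable, the pipeline can route those changes to stricter review on its own:

```python
# Sketch of provenance-aware routing, assuming a team convention of adding an
# "AI-Assisted: true" trailer to commits that contain AI-generated code.
# The git commands are standard; the trailer name itself is hypothetical.
import subprocess

def commits_in_range(rev_range: str) -> list:
    out = subprocess.run(["git", "rev-list", rev_range],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def is_ai_assisted(sha: str) -> bool:
    msg = subprocess.run(["git", "show", "-s", "--format=%B", sha],
                         capture_output=True, text=True, check=True).stdout
    return any(line.strip().lower() == "ai-assisted: true" for line in msg.splitlines())

flagged = [sha for sha in commits_in_range("origin/main..HEAD") if is_ai_assisted(sha)]
if flagged:
    # Downstream, this signal can require an extra reviewer or a deeper scan.
    print("Commits needing the stricter AI review path:", *flagged, sep="\n  ")
```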
Finally, I'd recommend that you pull trust signals into your pipeline itself, so reviews become automatic rather than optional. The problem with optional reviews is that, when pressure from above increases, they're the first thing to go.
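One way to make that non-optional is to bundle the earlier checks into a single required pipeline step; the script names here are placeholders for the sketches above, and the "required status check" wiring itself lives in whatever CI system you run:

```python
# Hypothetical ci_gate.py: runs the dependency policy check and the provenance
# check as one mandatory pipeline step, so neither can be skipped under
# deadline pressure. The script names are placeholders for the earlier sketches.
import subprocess
import sys

CHECKS = ["check_dependencies.py", "check_provenance.py"]  # assumed file names

failed = [c for c in CHECKS
          if subprocess.run([sys.executable, c]).returncode != 0]
if failed:
    print("Blocking merge; failed checks:", ", ".join(failed))
    sys.exit(1)
```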
A New Frontier
Generative AI isn't going anywhere fast. Google CEO Sundar Pichai claims over 25% of Google's new code is now written by AI, while Mark Zuckerberg wants half of Meta's coding to be AI-generated by 2026. Regardless of how this reflects developers' reality on the ground, we're in a new world now. So, we need to work out how we're going to make sure that our environments are safe, secure, and trustworthy.
Most DevOps engineers I know could spot hallucinated package names with their eyes closed. But the scale and speed of modern software development makes it unrealistic for any one person to have their eye on everything at once. When it comes to security, the best thing you can do is make it automatic. That starts with a pipeline you can trust.
Industry News
Kong announced the native availability of Kong Identity within Kong Konnect, the unified API and AI platform.
Amazon Web Services (AWS) is introducing a new generative AI developer certification, expanding its portfolio for professionals seeking to develop their cloud engineering skills.
Kong unveiled KAi, a new agentic AI co-pilot for Kong Konnect, the unified API and AI platform.
Azul and Cast AI announced a strategic partnership to help organizations dramatically improve Java runtime performance, reduce the footprint (compute, memory) of cloud compute resources and ultimately cut cloud spend.
Tricentis unveiled its vision for the future of AI-powered quality engineering, a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralized hub for managing quality at the speed and scale of modern innovation.
Kong announced new support to help enterprises adopt and scale MCP and agentic AI development.
Copado unveiled new updates to its Intelligent DevOps Platform for Salesforce, bringing AI-powered automation, Org Intelligence™, and a new Model Context Protocol (MCP) integration framework that connects enterprise systems and grounds AI agents in live context without silos or duplication.
Xray announced the launch of AI-powered testing capabilities, a new suite of human-in-the-loop intelligence features powered by the Sembi IQ platform.
Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals.
CleanStart announced the expansion of its Docker Hub community of free, vulnerability-free container images, surpassing 50 images, each refreshed daily to give developers access to current container builds.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Knative, a serverless, event-driven application layer on top of Kubernetes.
Sonatype announced the launch of Nexus Repository available in the cloud, the fully managed SaaS version of its artifact repository manager.
Spacelift announced Spacelift Intent, a new agentic, open source deployment model that enables the provisioning of cloud infrastructure through natural language without needing to write or maintain HCL.