Check Point® Software Technologies Ltd. announced its inclusion in Fast Company’s Next Big Things in Tech 2025 list.
Whether it's building an SDK, launching a new application title, or knocking out a version update with advanced capabilities, speed remains a primary competitive driver for development organizations and their customers. To that end, ultra-fast and frictionless mobile application development increasingly depends on automation.
More specifically, DevOps teams are readily embracing modern tools that utilize large language models (LLMs), generative AI (GenAI), and the very buzzy agentic AI to accelerate their continuous integration/continuous delivery (CI/CD) pipelines. An estimated 70% of professional developers will be using AI-powered coding tools by 2027; Google claims that more than a quarter of their new code is already generated by AI.
But AI's tremendous potential business value is currently outshining some very real risks to mobile applications and the broader software supply chain.
Code Flaws and Opaque Dependencies
To start with, AI tools are prone to making common mistakes in DevOps environments, including generating hardcoded secrets in code, misconfiguring infrastructure-as-code (IaC) with open permissions, and overlooking secure CI/CD pipeline configurations.
AI-based development tools also increase risks stemming from dependency chain opacity in mobile applications. Blind spots in the software supply chain will increase as AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies. Since AI simultaneously pulls code from multiple sources, traditional methods of dependency tracking will prove insufficient.
To mitigate the risks to mobile applications, any AI-generated code should undergo rigorous review to identify potential security vulnerabilities and quality issues early on, before they lead to costly problems downstream. Unfortunately, the responsibility for ensuring this kind of review before a release is often overlooked, and these simple, unforced errors are just the first line of potential hazards.
Slopsquatting, Hallucinations, and Bad Vibes
Any tool that brings positive benefits can also be abused or misused, and GenAI is no different. The term "slopsquatting" has emerged to describe instances where a threat actor registers a software package that doesn't actually exist. Similar to "typosquatting" (where malicious actors count on human spelling errors), slopsquatting anticipates a developer's misplaced trust in AI suggestions. If a developer installs one of these fake packages without first verifying it, malicious code can be introduced into the project.
Another issue is that many large frontier LLMs are trained on open-source software rather than on proprietary databases of secure code. As such, these LLMs are susceptible to replicating common open-source vulnerabilities, as well as data poisoning and malware attacks by malicious actors. Researchers recently discovered a specific instance where threat actors exploited machine learning (ML) models using the Pickle file format to conceal malware inside seemingly legitimate AI-related software packages.
Perhaps even more concerning, LLMs may recommend vulnerable, insecure, or non-existent open-source libraries independently. These package hallucinations can lead to a novel form of package confusion attack for careless developers. The hallucination problem is also predictably pervasive. A recent university study of over 500,000 LLM-generated code samples found that nearly 1 in 5 packages suggested by AI didn't exist. They discovered 205,474 unique examples of hallucinated package names; commercial models were 5.2% likely to include at least one hallucinated package, and that rate jumped to 21.7% for open-source models.
While these vulnerabilities may seem isolated, they can have far-reaching downstream implications for software supply chains. A prompt injection vulnerability might allow an LLM to be manipulated through malicious inputs to generate incorrect or insecure code that spreads through connected systems. One such prompt injection vulnerability was discovered in OpenAI's ChatGPT late last year.
The developer trend of intuitive "vibe coding" may take package hallucinations into serious bad trip territory. The term refers to developers using casual AI prompts to generally describe a desired mobile app outcome; the AI tool then generates code to achieve it. Counter to the common wisdom of zero trust, vibe coding tends to lean heavily on trust; developers very often copy and paste code results without any manual review checks. Any hallucinated packages that get carried over can become easy entry points for threat actors.
Agentic AI Amplifies the Chances for Trouble
According to OWASP, agentic AI represents an advancement in autonomous systems. Integration with LLMs and GenAI has significantly expanded the scale and capabilities of using these tools, as well as the associated risks. Relying on these complex multi-agent systems not only intensifies dependency opacity and multiplies the chances for error generation, it also creates opportunities for code generation tool misuse by malicious actors. OWASP specifically calls out the potential for new attack vectors using Remote Code Execution (RCE) and other code attacks.
While some predict that agentic AI will disrupt the mobile application landscape by ultimately replacing traditional apps, other modes of disruption seem more immediate. For instance, researchers recently discovered an indirect prompt injection flaw in GitLab's built-in AI assistant Duo. This could allow attackers to steal source code or inject untrusted HTML into Duo's responses and direct users to malicious websites.
Build Security into the Mobile App SDLC
While the advertised efficiency, cost, and time-to-market advantages of AI-assisted development are all tantalizing, those savings would be only short-term gains if they ultimately lead to a security incident. The associated challenges and risks to development organizations are not going unnoticed. A recent Gartner survey of software engineering/application development leaders in the US and UK found that the use of AI tools to augment software engineering workflows was a significant or moderate pain point for 71% of respondents.
To actualize the potential value of AI in DevOps, organizations need to treat these powerful tools like any other user, device, or application within the Zero Trust framework. Developers need to de-risk AI adoption by embracing effective solutions for testing, protection, and monitoring. A secure software development lifecycle (SDLC) for mobile applications is one that integrates security across every phase, including solutions for:
■ Mobile application security testing (MAST) that maintains development speed without compromising security.
■ Code hardening and obfuscation tools to make reverse engineering significantly more difficult for threat actors.
■ Runtime application self-protection (RASP) to detect and block tampering attempts while the app is running.
■ App attestation to ensure that only legitimate, trusted apps can interact with your APIs and protect your application from bots, malware, fraud, and targeted attacks.
■ Real-time threat monitoring to continuously observe the app in the field as the threat landscape evolves.
Industry News
Kong announced the native availability of Kong Identity within Kong Konnect, the unified API and AI platform.
Amazon Web Services (AWS) is introducing a new generative AI developer certification, expanding its portfolio for professionals seeking to develop their cloud engineering skills.
Kong unveiled KAi, a new agentic AI co-pilot for Kong Konnect, the unified API and AI platform.
Azul and Cast AI announced a strategic partnership to help organizations dramatically improve Java runtime performance, reduce the footprint (compute, memory) of cloud compute resources and ultimately cut cloud spend.
Tricentis unveiled its vision for the future of AI-powered quality engineering, a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralized hub for managing quality at the speed and scale of modern innovation.
Kong announced new support to help enterprises adopt and scale MCP and agentic AI development.
Copado unveiled new updates to its Intelligent DevOps Platform for Salesforce, bringing AI-powered automation, Org Intelligence™, and a new Model Context Protocol (MCP) integration framework that connects enterprise systems and grounds AI agents in live context without silos or duplication.
Xray announced the launch of AI-powered testing capabilities, a new suite of human-in-the-loop intelligence features powered by the Sembi IQ platform.
Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals.
CleanStart announced the expansion of its Docker Hub community of free vulnerability-free container images, surpassing 50 images, each refreshed daily to give developers access to current container builds.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Knative, a serverless, event-driven application layer on top of Kubernetes.
Sonatype announced the launch of Nexus Repository available in the cloud, the fully managed SaaS version of its artifact repository manager.
Spacelift announced Spacelift Intent, a new agentic, open source deployment model that enables the provisioning of cloud infrastructure through natural language without needing to write or maintain HCL.