From Prompt Injection to Vibe Coding: How GenAI Is Reshaping Software Development
September 04, 2025

May Wang
Palo Alto Networks

In the history of how Generative AI (GenAI) moved from bleeding-edge technology to the core of enterprise operations, 2024 was a banner year. GenAI traffic surged by over 890%, driven by maturing AI models, increased automation, and tangible productivity gains. What this banner year also shows is that organizations are trying out new approaches that embed AI in daily workflows. Organizations across the globe that are integrating GenAI into daily workflows are seeing productivity gains translate into an incremental economic impact of up to 40%.

A recent Palo Alto Networks report highlights the dual nature of GenAI tools: their success in areas like writing, testing, and deploying code, and the new risks they introduce, such as data exposure and malicious code generation. For DevOps teams, the key to success will be to leverage GenAI's power while ensuring control, security, and accountability.

Adoption Patterns of GenAI in App Development

The top GenAI use cases accounting for 73.5% of transactions — instances of AI usage or queries — today aren't specific to developers. These are the applications nearly every user in an organization can benefit from: writing assistants (34%), conversational agents (28.9%), and enterprise search (10.6%). Each person has an opportunity to change their daily work thanks to these innovations.

Beyond those more general use cases, developers are clearly power users, since AI developer platforms make it to the fourth spot with over 10% of all GenAI usage. These platforms offer developers real-time support by automating repetitive tasks and accelerating the software development lifecycle. AI copilots are increasingly integrated into the development environment, enabling developers to focus on more creative tasks and problem solving while AI handles the rest.

The impact of GenAI tools is widespread, though developers in different industries are adopting at a variety of rates. Those industries that require a bit of collaboration with humans, like doctors, financial regulators, or lawyers, have AI in some of their workflows, but robust adoption is further off.

Industries with fewer human touchpoints have been fertile ground for GenAI apps. High-tech and manufacturing sectors, for instance, together account for nearly 39% of GenAI coding transactions. In manufacturing, GenAI tools are used to rapidly prototype designs, automate quality inspections, and optimize supply chains. In high tech organizations, they help teams deliver software faster and with fewer bugs, freeing engineers to focus on innovation rather than maintenance.

Emerging Risks

Yet, the same tools that help people to work smarter and faster can also introduce systemic risks if not properly governed. Threat actors can manipulate AI agents through techniques like prompt injection, altering the memory and behavior of GenAI tools to produce flawed outputs or gain unauthorized access to enterprise systems.

These risks require that developers remain vigilant, understanding that while GenAI can enhance productivity, it can also be exploited when not properly secured. The takeaway for developers is clear: every GenAI tool introduces potential vulnerabilities, and understanding how these tools process and store data is essential to making decisions that keep organizations secure.

The average organization has about 66 GenAI apps in use, and of those, an average of 6.6 are classified as high-risk due to weak security controls that leave them open to data leaks and compliance failures. When that's paired with the fact that high tech organizations transferred roughly 53 GB of downloads and 14 GB of uploads per company to apps in the "Code Assistant and Generator" category in 2024, the degree of risk becomes apparent.

These facts are especially concerning given that researchers were able to exfiltrate sensitive data from code-generating conversational apps with a nearly 10% success rate. To offset these risks, developers need to scrutinize how their tools handle data and enforce strict usage policies. They must be aware of the risks associated with AI developer platforms, including data exposure, malicious code execution, and legal uncertainties around generated content.

As adoption grows, so does the need for secure development practices. Developers should prioritize tools that offer transparency, auditability, and compliance features, and work closely with security teams to vet third-party GenAI tools before integration.

Pitfalls in AI-Driven Development Workflows

As GenAI tools become more sophisticated, they also become more opaque. Just one example is the emerging trend of "vibe coding," where developers and non-developers rely on AI to generate code based on non-technical prompts. While vibe coding can accelerate development, how AI-generated code works under the hood is largely unknown to the prompter. Without visibility, developers will have a harder time debugging, auditing, and securing outputs.

The risks extend beyond code quality. AI-generated outputs can make use of insecure libraries or even embedded vulnerabilities. Without rigorous validation, these issues can spread throughout systems, leading to security breaches. Between potential quality issues and new vulnerabilities, employees are more likely to expose intellectual property, personal data, and other sensitive data.

For developers, the key is to treat AI-generated code as a starting point, not a final product. Every suggestion should be reviewed, tested, and validated before deployment. Developers must also advocate for transparency in AI tools, demanding features that allow them to trace the origin and rationale behind generated code. In this new era of AI-driven development, understanding how the tools work under the hood is not optional.

While GenAI apps present numerous risks to developers, they can be mitigated by understanding how the tools work, and being familiar with their flaws. GenAI will have immense upside across industries, secure usage of GenAI becomes even more critical. Knowing what's at stake is a great place to start.

Dr. May Wang is CTO of IoT Security at Palo Alto Networks
Share this

Industry News

October 14, 2025

Tricentis unveiled its vision for the future of AI-powered quality engineering, a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralized hub for managing quality at the speed and scale of modern innovation.

October 14, 2025

Kong announced new support to help enterprises adopt and scale MCP and agentic AI development.

October 14, 2025

Copado unveiled new updates to its Intelligent DevOps Platform for Salesforce, bringing AI-powered automation, Org Intelligence™, and a new Model Context Protocol (MCP) integration framework that connects enterprise systems and grounds AI agents in live context without silos or duplication.

October 09, 2025

Xray announced the launch of AI-powered testing capabilities, a new suite of human-in-the-loop intelligence features powered by the Sembi IQ platform.

October 09, 2025

Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals.

October 09, 2025

CleanStart announced the expansion of its Docker Hub community of free vulnerability-free container images, surpassing 50 images, each refreshed daily to give developers access to current container builds.

October 08, 2025

The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Knative, a serverless, event-driven application layer on top of Kubernetes.

October 08, 2025

Sonatype announced the launch of Nexus Repository available in the cloud, the fully managed SaaS version of its artifact repository manager.

October 08, 2025

Spacelift announced Spacelift Intent, a new agentic, open source deployment model that enables the provisioning of cloud infrastructure through natural language without needing to write or maintain HCL.

October 07, 2025

IBM announced a strategic partnership to accelerate the development of enterprise-ready AI by infusing Anthropic’s Claude, one of the world’s most powerful family of large language models (LLMs), into IBM’s software portfolio to deliver measurable productivity gains, while building security, governance, and cost controls directly into the lifecycle of software development.

October 07, 2025

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced its intent to launch the React Foundation.

October 07, 2025

Appvance announced a new feature in its AIQ platform: automatic generation of API test data and scripts directly from OpenAPI specifications using generative AI.

October 06, 2025

Mirantis announced availability of Mirantis OpenStack for Kubernetes (MOSK) 25.2 that simplifies cloud operations and strengthens support for GPU-intensive AI workloads as well as traditional enterprise applications.

October 06, 2025

Cycloid released a new model context protocol (MCP) compliant server that can undertake a range of platform actions, allowing users to interact with the MCP using natural language via an LLM (Large Language Model).

October 06, 2025

The Adaptavist Group announced the acquisition of D|OPS Digital, a DevSecOps consultancy that increases the efficiency and speed of software delivery.