AI is appearing everywhere in software development, from chatbots to code generation in internal tools. But while adoption is climbing, oversight often isn't. Teams are experimenting with large language models (LLMs) and the Model Context Protocol (MCP) across organizations without clear guidelines or shared infrastructure, and that's a problem. This is especially urgent given the pressure to deploy AI agents as quickly as possible.
One of the fastest-growing technologies in these early deployments is MCP — a lightweight, open protocol designed to standardize the way applications provide context to AI agents and LLMs. Think of it like USB-C for AI: a universal way to plug intelligent agents into the services and systems they need to understand and interact with. Whether you're connecting to local data sources or remote APIs, MCP enables a clean, secure and flexible approach for AI to integrate with the real world without writing bespoke integrations every time. Since its introduction in the second half of 2024, MCP has emerged as the API standard for AI agents.
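To make the "USB-C for AI" analogy concrete, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK (the `mcp` package) is installed. The `order-service` name, the `get_order_status` tool and its data are invented for illustration; the point is that any MCP-aware agent or host can discover and call the tool without a bespoke integration.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper
# (pip install "mcp[cli]"). The service name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-service")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the current status of an order (stubbed for illustration)."""
    # A real server would query an internal system of record here.
    fake_orders = {"A-1001": "shipped", "A-1002": "processing"}
    return fake_orders.get(order_id, "unknown")

if __name__ == "__main__":
    # Serves the tool over stdio so a local MCP host (agent) can connect.
    mcp.run()
```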
According to a recent survey of developers, engineers and IT leaders conducted by Kong, 72% of enterprises plan to increase spending on generative AI (GenAI) this year. Nearly 40% expect to spend over $250,000. Yet 44% of those respondents say governance and security are their most significant barriers to adoption.
So while organizations are eager to get AI into production, they'll be slowed down if they lack a way to manage traffic, improve and monitor performance, and ensure enterprise-grade security. We anticipate that agents will make this even more challenging, since they'll need a degree of autonomy to make decisions within certain parameters to accomplish their goals.
AI Sprawl Is Real
In many companies, AI use isn't centrally coordinated. One team might spin up a chatbot with OpenAI. Another could be testing code suggestions with Google's Gemini models. Yet another group may be using DeepSeek or open-source models for internal agents.
This scattered approach quickly becomes cumbersome. Prompts aren't logged, models aren't compared, data flows to third-party services without oversight, and developers are left to troubleshoot LLMs without context or tools.
That's unsustainable, especially as AI becomes part of critical systems and customer-facing products.
Why DevOps Teams Should Care
DevOps isn't just about deployment pipelines anymore. It's about making sure the systems we build are reliable, observable and secure. AI changes the shape of those systems, but not the fundamentals.
Many of the same problems DevOps already solves, like automation, performance monitoring and access control, now apply to AI. But they need new tooling.
Imagine trying to debug why an LLM gave an inadequate response without visibility into which model was used, what data it saw or how the prompt was structured. Or imagine discovering your app exposed PII because a prompt slipped through without sanitization. These are real risks.
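As a rough, non-authoritative sketch of that visibility, the snippet below wraps a model call with basic prompt sanitization and structured request logging. `call_model` is a stand-in for whatever provider SDK a team actually uses, and the redaction patterns are deliberately simplistic.

```python
# Sketch of per-request visibility: which model was used, what the prompt
# looked like after sanitization, and how long the call took. `call_model`
# stands in for whatever provider SDK a team actually uses.
import json
import logging
import re
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-traffic")

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(prompt: str) -> str:
    """Redact a couple of obvious PII patterns before the prompt leaves the org."""
    return SSN.sub("[REDACTED-SSN]", EMAIL.sub("[REDACTED-EMAIL]", prompt))

def logged_completion(model: str, prompt: str, call_model) -> str:
    """Wrap any provider call with sanitization and a structured request log."""
    request_id = str(uuid.uuid4())
    clean_prompt = sanitize(prompt)
    start = time.monotonic()
    response = call_model(model=model, prompt=clean_prompt)  # provider-specific
    log.info(json.dumps({
        "request_id": request_id,
        "model": model,
        "prompt_chars": len(clean_prompt),
        "redacted": clean_prompt != prompt,
        "latency_ms": round((time.monotonic() - start) * 1000),
    }))
    return response
```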
The Infrastructure Gap
Currently, most companies don't have shared infrastructure to handle LLM traffic like they manage APIs or services. That means developers are building their own one-off wrappers, logging tools and access policies, or skipping those steps entirely.
This approach slows teams down in the long run. Without standard systems for governance, observability or routing, companies lose visibility, increase risk and make AI harder to scale.
What's missing is a foundational layer, one that helps route traffic to the right model, track how prompts are used, apply consistent policies and give platform teams the controls they need.
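As one illustration of what that foundational layer could look like, here is a minimal sketch of a shared router: it picks a model per task, enforces one example of an org-wide policy and records per-team usage. The tasks, model names and handlers are all hypothetical.

```python
# Sketch of a shared routing layer: one place that decides which model serves
# a request, applies an org-wide policy, and records per-team usage.
# Tasks, model names, and handlers are illustrative only.
from collections import Counter
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class LLMRouter:
    routes: Dict[str, str]                     # task -> model name
    handlers: Dict[str, Callable[[str], str]]  # model name -> provider call
    max_prompt_chars: int = 8000               # one example of a shared policy
    usage: Counter = field(default_factory=Counter)

    def complete(self, team: str, task: str, prompt: str) -> str:
        if len(prompt) > self.max_prompt_chars:
            raise ValueError("prompt exceeds the org-wide size policy")
        model = self.routes.get(task, self.routes["default"])
        self.usage[(team, model)] += 1         # per-team, per-model visibility
        return self.handlers[model](prompt)

# Every team calls the same router instead of maintaining its own wrapper.
router = LLMRouter(
    routes={"code": "model-a", "support": "model-b", "default": "model-b"},
    handlers={"model-a": lambda p: "stubbed response",
              "model-b": lambda p: "stubbed response"},
)
print(router.complete(team="platform", task="code", prompt="Explain this stack trace"))
```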
Additionally, AI traffic isn't driven solely by LLM consumption; it also comes from calls to MCP servers and other APIs, which determine how capable our agents will be at accessing data and services once they go into production.
Governance and security therefore extend to that traffic as well: LLMs drive agentic intelligence, while MCP and APIs determine how effectively agents can harness that intelligence.
What Comes Next?
The good news is that this isn't a new kind of problem. It's just showing up in a different way. DevOps teams have already built the culture and skills to manage fast-moving technology with discipline. Now, it's time to apply that same thinking to AI.
Start by treating LLM usage like any other critical service:
■ Log and monitor AI traffic
■ Apply security and access controls (a brief sketch follows this list)
■ Track model usage across teams
■ Standardize how teams prompt, test and deploy AI-driven features
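As a toy example of the access-control item above (not a recommendation of any particular tool), a central allow-list of which teams may call which models could be enforced in the same gateway that logs traffic. Team names, model names and the policy source are made up for the sketch.

```python
# Toy illustration of the access-control item: a central allow-list of which
# teams may call which models, enforced before any request leaves the gateway.
ALLOWED_MODELS = {
    "payments": {"model-b"},             # e.g. only an internally hosted model
    "support": {"model-a", "model-b"},
}

def check_access(team: str, model: str) -> None:
    """Raise if a team tries to use a model it isn't approved for."""
    if model not in ALLOWED_MODELS.get(team, set()):
        raise PermissionError(f"{team} is not approved to use {model}")

check_access("support", "model-a")       # passes silently
# check_access("payments", "model-a")    # would raise PermissionError
```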
While AI regulation remains in limbo, companies can take action now. The ones that do will be better prepared to scale AI without sacrificing security or performance.
Because when AI breaks, the fix isn't just better prompts. It's better systems.
Industry News
Kong announced the native availability of Kong Identity within Kong Konnect, the unified API and AI platform.
Amazon Web Services (AWS) is introducing a new generative AI developer certification, expanding its portfolio for professionals seeking to develop their cloud engineering skills.
Kong unveiled KAi, a new agentic AI co-pilot for Kong Konnect, the unified API and AI platform.
Azul and Cast AI announced a strategic partnership to help organizations dramatically improve Java runtime performance, reduce the footprint (compute, memory) of cloud compute resources and ultimately cut cloud spend.
Tricentis unveiled its vision for the future of AI-powered quality engineering, a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralized hub for managing quality at the speed and scale of modern innovation.
Kong announced new support to help enterprises adopt and scale MCP and agentic AI development.
Copado unveiled new updates to its Intelligent DevOps Platform for Salesforce, bringing AI-powered automation, Org Intelligence™, and a new Model Context Protocol (MCP) integration framework that connects enterprise systems and grounds AI agents in live context without silos or duplication.
Xray announced the launch of AI-powered testing capabilities, a new suite of human-in-the-loop intelligence features powered by the Sembi IQ platform.
Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals.
CleanStart announced the expansion of its Docker Hub community of free, vulnerability-free container images, surpassing 50 images, each refreshed daily to give developers access to current container builds.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Knative, a serverless, event-driven application layer on top of Kubernetes.
Sonatype announced the launch of Nexus Repository in the cloud, a fully managed SaaS version of its artifact repository manager.
Spacelift announced Spacelift Intent, a new agentic, open source deployment model that enables the provisioning of cloud infrastructure through natural language without needing to write or maintain HCL.