Check Point® Software Technologies Ltd. announced its inclusion in Fast Company’s Next Big Things in Tech 2025 list.
In late July, the White House published "America's AI Action Plan," a 28-page document outlining the administration's goals for the creation and use of artificial intelligence in the United States. Through three pillars — Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy & Security — this document describes high-level objectives and provides recommended policy initiatives for federal agencies.
This plan is not a comprehensive prescription for a set of practices, so why worry about it now? Many of the recommendations in this document will be the basis for regulation and legislation over the next 12-to-18 months. Although the exact details remain to be seen, you can take steps now to head in the right direction and refine your approach as specific rules come into place. Getting a head start can be a great competitive advantage, especially if you're working with federal government clients.
Secure-By-Design Comes to AI
You may already be familiar with the Cybersecurity & Infrastructure Security Agency's (CISA) Secure By Design principles for software. This outlines steps that companies should take to incorporate security practices into the design, build, and delivery of software. The goal is to proactively address security concerns instead of reacting to breaches and other incidents.
Pillar II of America's AI Action Plan extends the Secure-By-Design approach to AI models as well. At a basic level, AI models are software plus data, which means if you're following a Secure-By-Design approach to your software, you're already well on your way. Just like your software is built with open source libraries as dependencies, you have external data dependencies. This could be in the form of public or proprietary databases, content scraped from websites like Wikipedia, code from forges like GitHub, and so on. These data sources have a lot of great information, and a lot of not-so-great information, too. You need to make sure that the data you use to train your models are accurate, reliable, and representative. AI models are only as good as the data they're trained on.
Just as important as checking the model input: checking the model output. America's AI Action Plan only covers this briefly, but using AI models with "real world" impacts — especially in medical and defense sectors — means you need guardrails against bad output. Because AI models are non-deterministic, you can't just write a few tests and call it a day. You'll need to take an adversarial approach. Think of all of the ways an attacker might get your model to crash, produce unsafe output, or leak training data. Picking the right kind of model for the problem at hand will help, too. In almost all cases, what you want is a small language model focused on your particular problem domain, not a general-purpose large language model.
Like with your software, you may be asked to bring receipts for AI training data. This means having provenance records to indicate how you source the training data. You'll want your data providers to attest to what steps they took to validate and secure the data, and the infrastructure used to ingest, store, and share the data.
Sharing (Security Data) Is Caring
Safety threats are not a trade secret. Sharing threat intelligence across organizations — both private- and public-sector — helps keep everyone safe. We see this happen in the financial services industry: bank security teams share information about attacks because they know that successful attacks have a ripple effect throughout the whole ecosystem.
America's AI Action Plan calls for the Department of Homeland Security to create an AI Information Sharing and Analysis Center (ISAC) to "promote the sharing of AI-security threat information and intelligence across US critical infrastructure sectors." The ISAC model has worked well for nearly three decades, with 28 industry-specific ISACs in the National Council of ISACs. While we wait for the AI-ISAC to spin up, you may want to join an ISAC specific to your industry, if one exists.
Information sharing isn't just for the government and other practitioners through an ISAC. Your customers, especially government agencies, will want to know about the vulnerabilities in your models (and the rest of your software, too). They need this information to help protect themselves. You can expect to see new disclosure guidelines that cover when and how to disclose vulnerabilities, along with the level of detail required. To set your customers' minds at ease, you should also begin the practice of ingesting Vulnerability Exploitability eXchange (VEX) documents from your upstreams and producing them for your software and models. VEX documents provide information about how software is susceptible to a particular vulnerability, and, critically, how it's not. With VEX statements, you can provide machine-readable information to say, for example, "this software contains a particular vulnerability, but it is not exploitable because we don't call the affected code path."
Getting Started Is the Best Way to Start
The general directions in this article are a good starting point to meeting the most likely requirements that will spring from America's AI Action Plan. They're also good practices to follow anyway. Getting started now gets you further down the road. Every improvement you make to your security posture makes it that much harder for adversaries to get in.
Industry News
Kong announced the native availability of Kong Identity within Kong Konnect, the unified API and AI platform.
Amazon Web Services (AWS) is introducing a new generative AI developer certification, expanding its portfolio for professionals seeking to develop their cloud engineering skills.
Kong unveiled KAi, a new agentic AI co-pilot for Kong Konnect, the unified API and AI platform.
Azul and Cast AI announced a strategic partnership to help organizations dramatically improve Java runtime performance, reduce the footprint (compute, memory) of cloud compute resources and ultimately cut cloud spend.
Tricentis unveiled its vision for the future of AI-powered quality engineering, a unified AI workspace and agentic ecosystem that brings together Tricentis’ portfolio of AI agents, Model Context Protocol (MCP) servers and AI platform services, creating a centralized hub for managing quality at the speed and scale of modern innovation.
Kong announced new support to help enterprises adopt and scale MCP and agentic AI development.
Copado unveiled new updates to its Intelligent DevOps Platform for Salesforce, bringing AI-powered automation, Org Intelligence™, and a new Model Context Protocol (MCP) integration framework that connects enterprise systems and grounds AI agents in live context without silos or duplication.
Xray announced the launch of AI-powered testing capabilities, a new suite of human-in-the-loop intelligence features powered by the Sembi IQ platform.
Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals.
CleanStart announced the expansion of its Docker Hub community of free vulnerability-free container images, surpassing 50 images, each refreshed daily to give developers access to current container builds.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Knative, a serverless, event-driven application layer on top of Kubernetes.
Sonatype announced the launch of Nexus Repository available in the cloud, the fully managed SaaS version of its artifact repository manager.
Spacelift announced Spacelift Intent, a new agentic, open source deployment model that enables the provisioning of cloud infrastructure through natural language without needing to write or maintain HCL.