Modern infrastructure is dependent on many diverse service providers, whose individual vulnerabilities contribute to the risk model of the target systems. Adding artificial intelligence to the equation only complicates the issue further. Yet the critical services need a high level of resilience and standard compliance. In their paper “Supply Chain Security and AI Risk Governance Model for Critical Infrastructure under NIS2, CER, and CRA” Natalija Parlov, Gordan Akrap, and Josip Esterhajer provide an analysis of multiple security standards and present measures for assessment and systemic improvement of infrastructural resilience.
Read it on our website: https://www.acigjournal.com/Supply-Chain-Security-and-AI-Risk-Governance-Model-for-Critical-Infrastructure-under,211823,0,2.html
Natalija Parlov, Gordan Akrap, Josip Esterhajer, “Supply Chain Security and AI Risk Governance Model for Critical Infrastructure under NIS2, CER, and CRA.” At the top of the image the logos of ACIG and NASK can be seen. At the bottom there is a tagline “New article.”
Wenn KI-Assistenten Zugriff auf vertrauliche Daten haben, fremde Inhalte verarbeiten und selbst nach außen kommunizieren, entsteht ein „tödlicher Dreiklang“. Angreifer können die KI missbrauchen und Daten erbeuten.
🔒 Zum Schutz: Systeme so gestalten, dass mindestens ein Punkt ausgeschlossen ist.
Bild 2: Eine dunkelblaue Grafik im selben Design; in weißer Schrift steht:
Die Kombination aus diesen drei Punkten macht KI besonders angreifbar:
1️⃣ Zugriff auf vertrauliche Daten
2️⃣ Verarbeitung nicht-vertrauenswürdiger Inhalte
3️⃣ Möglichkeit zur externen Kommunikation
Cyberkriminelle nutzen Prompt Injections, um das Lethal Trifecta auszunutzen.
Das sind versteckte Anweisungen in Texten, die die KI dazu bringen, vertrauliche Infos auszulesen und heimlich zu verschicken.
Wie ihr euch schützt:
- Mindestens einen Faktor ausschalten:
z. B. KIs keine externen Kommunikationswege erlauben (keine E-Mails, Hyperlinks oder entfernten Inhalte).
- Sichere Systemarchitektur:
durchdachte Design-Muster (siehe BSI + ANSSI Paper), die KI gegen Angreifer absichern
Microsoft Copilot for SharePoint just made recon a whole lot easier. 🚨
One of our Red Teamers came across a massive SharePoint, too much to explore manually. So, with some careful prompting, they asked Copilot to do the heavy lifting...
It opened the door to credentials, internal docs, and more.
All without triggering access logs or alerts.
Copilot is being rolled out across Microsoft 365 environments, often without teams realising Default Agents are already active.
That’s a problem.
Jack, our Head of Red Team, breaks it down in our latest blog post, including what you can do to prevent it from happening in your environment.
Man, this whole AI hype train... Yeah, sure, the tools are definitely getting sharper and faster, no doubt about it. But an AI pulling off a real pentest? Seriously doubt that's happening anytime soon. Let's be real: automated scans are useful, but they just aren't the same beast as a genuine penetration test.
Honestly, I think security needs to be woven right into the fabric of a company from the get-go. It can't just be an afterthought you tack on when alarms are already blaring.
Now, don't get me wrong, AI definitely brings its own set of dangers – disinformation is a big one that springs to mind. But here's the thing: we absolutely have to get our heads around these tools and figure them out. If we don't keep pace, we risk becoming irrelevant pretty quick.
So, curious to hear what you all think – where do the greatest pitfalls lie with AI in the security field? What keeps you up at night?