When: February 16-18, 2026
Where: UCT, Cape Town, South Africa
We are inviting applications from graduate students and researchers in the areas of Computer Science and Cybersecurity with a focus on AI. During our annual scientific event, students will have the opportunity to attend scientific talks and workshops, present their own work during poster sessions, and discuss relevant topics with fellow researchers and expert speakers. This year's edition is organized in collaboration with UCT and will take place in Cape Town, South Africa.
Application Process: Please fill in our online application form.
Notification of Acceptance: We will notify you via email.
Fee: none
Deadline for Regular Application: February 09, 2026.
Daniel Arp, TU Wien
Title: Pitfalls in AI for Security
Abstract: Advances in computational power and the proliferation of massive datasets have propelled artificial intelligence (AI) to achieve major breakthroughs across a wide spectrum of applications—from image recognition and natural language processing to autonomous systems and scientific discovery. Yet when AI techniques are applied to security, they encounter a host of subtle pitfalls that can seriously undermine performance and, in the worst case, render learning-based systems unsuitable for real-world deployment. In this lecture, we will take an in-depth look at these pitfalls and explore how they manifest in various security domains, such as malware detection and vulnerability discovery, where they frequently lead to inflated assessments of system effectiveness. We’ll survey illustrative case studies drawn from academic literature to gauge the prevalence of these issues and, finally, discuss recommendations for avoiding them when designing experiments.
Bio: Daniel Arp is a tenure-track Assistant Professor in the Security and Privacy Research Unit at Technische Universität Wien. Previously, he held a postdoctoral research position at TU Berlin and visiting research positions at University College London and King’s College London. He received his Ph.D. with honours in Computer Science from TU Braunschweig and holds a master’s degree in Computer Engineering from TU Berlin. Daniel’s research focuses on developing learning-based methods to strengthen the security and privacy of computer systems.
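To make the kind of pitfall discussed in the abstract above more concrete, here is a minimal, self-contained sketch on synthetic data (our own illustration, not material from the lecture) of one well-documented issue: evaluating a detector with a random train/test split although the data drifts over time, which tends to inflate the measured accuracy compared to a time-aware split.

```python
# A synthetic illustration (not from the lecture) of one well-documented pitfall:
# a random train/test split on time-drifting data inflates the measured accuracy
# compared to a time-aware split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
t = np.sort(rng.uniform(0.0, 1.0, n))      # pretend collection time of each sample
X = rng.normal(size=(n, 5))                # feature 0 is informative, the rest is noise
y = (X[:, 0] > 2.0 * t - 1.0).astype(int)  # the decision boundary shifts over time

# Pitfall: a random split mixes "future" and "past" samples in training and testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
acc_random = accuracy_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
)

# Time-aware split: train only on older samples, test on newer ones.
cut = int(0.7 * n)
acc_temporal = accuracy_score(
    y[cut:], LogisticRegression(max_iter=1000).fit(X[:cut], y[:cut]).predict(X[cut:])
)

print(f"random split accuracy:   {acc_random:.2f}")   # typically clearly higher ...
print(f"temporal split accuracy: {acc_temporal:.2f}")  # ... than the realistic estimate
```

On this synthetic data the random split usually reports a noticeably higher accuracy, which is exactly the kind of inflated assessment the lecture warns about.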
Kathrin Grosse, IBM
Title: From AI Vulnerabilities to AI Security Incident Reporting and Beyond
Abstract: In this talk, we revisit the evidence of vulnerabilities and exploits within the realm of Artificial Intelligence, encompassing both traditional AI and Large Language Models (LLMs). Such vulnerabilities necessitate prevention, which we suggest could be handled by incident reporting. Such a procedure is well established in non-AI security, yet AI security warrants special treatment: AI is highly versatile, and AI models differ significantly from conventional software with vulnerabilities. However, a significant challenge is not just the lack of a standardized reporting framework, but also a knowledge gap among practitioners. Even when they are aware of the risks, many lack the practical guidance needed to effectively evaluate and secure their models. Our discussion will thus also cover how to threat model real-world applications using AI.
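As a loose illustration of what a structured AI security incident report might capture, here is a minimal sketch; the fields (affected system, attack class, impact, mitigation) are our own assumptions and do not correspond to any existing reporting standard or to the framework discussed in the talk.

```python
# A hypothetical, minimal structure for an AI security incident report.
# Field names are our own assumptions, not an established reporting standard.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIIncidentReport:
    reported_on: date
    affected_system: str   # e.g. "LLM-based support chatbot"
    model_type: str        # e.g. "fine-tuned open-weights LLM"
    attack_class: str      # e.g. "prompt injection", "evasion", "data poisoning"
    observed_impact: str   # what actually went wrong
    data_exposed: bool     # did the incident leak training or user data?
    mitigation: str        # interim or permanent fix

report = AIIncidentReport(
    reported_on=date(2026, 2, 16),
    affected_system="LLM-based support chatbot",
    model_type="fine-tuned open-weights LLM",
    attack_class="prompt injection",
    observed_impact="content filter bypassed, internal pricing data disclosed",
    data_exposed=True,
    mitigation="input sanitisation and stricter output filtering",
)
print(asdict(report))
```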
Sofía Celi, Brave
Godwin Mandinyenya, UCT
Title: AI-driven Zero-trust Models for Healthcare Systems
Abstract: Healthcare systems face growing cyber risks that exceed the capabilities of perimeter-based security. This work explores the potential of AI-driven Zero-Trust models for healthcare environments, focusing on continuous authentication, adaptive trust assessment, and context-aware access control. The work outlines key architectural principles, opportunities, and challenges of integrating artificial intelligence into Zero-Trust healthcare security. The contribution is a conceptual foundation for future research on adaptive, resilient, and compliance-aware cybersecurity models for healthcare systems.
Bio: Dr Godwin Mandinyenya is a Post-Doctoral Fellow in the Department of Information Systems at the University of Cape Town. His doctoral studies focused on blockchain security, artificial intelligence, and data privacy. He is currently conducting research on cybersecurity frameworks, digital inclusion, and risk mitigation in rural and disadvantaged educational environments. With over 11 years of teaching and supervision experience, he has published extensively in IEEE, ACM, and Springer venues. His research interests include zero-trust security models, healthcare cybersecurity, distributed systems, and ethical AI-blockchain integration.
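To illustrate what the abstract's continuous authentication, adaptive trust assessment, and context-aware access control could look like in code, here is a minimal sketch under our own assumptions; the signals, weights, and thresholds are hypothetical placeholders, not Dr Mandinyenya's model.

```python
# A hypothetical sketch of context-aware, per-request access decisions in a
# zero-trust style: nothing is trusted just because it is "inside" the network.
# Weights and thresholds are illustrative placeholders, not values from the talk.
from dataclasses import dataclass

@dataclass
class RequestContext:
    device_compliant: bool       # e.g. patched OS, disk encryption enabled
    known_location: bool         # request comes from an expected region/network
    anomaly_score: float         # 0.0 (normal) .. 1.0 (highly unusual behaviour),
                                 # in an AI-driven setup produced by a learned model
    resource_sensitivity: float  # 0.0 (public info) .. 1.0 (patient records)

def trust_score(ctx: RequestContext) -> float:
    """Combine context signals into a single trust value in [0, 1]."""
    score = 0.5
    score += 0.25 if ctx.device_compliant else -0.25
    score += 0.15 if ctx.known_location else -0.15
    score -= 0.4 * ctx.anomaly_score
    return max(0.0, min(1.0, score))

def decide(ctx: RequestContext) -> str:
    """Re-evaluated on every request: allow, step-up authentication, or deny."""
    required = 0.4 + 0.5 * ctx.resource_sensitivity  # sensitive data demands more trust
    t = trust_score(ctx)
    if t >= required:
        return "allow"
    if t >= required - 0.2:
        return "step-up-auth"  # e.g. ask for a second factor, then re-evaluate
    return "deny"

# Example: a compliant device showing unusual behaviour requests patient records.
print(decide(RequestContext(True, True, anomaly_score=0.7, resource_sensitivity=0.9)))
```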
Abstract: Generative AI (genAI) is becoming more integrated into our daily lives, raising questions about potential threats within these systems and their outputs. In this talk, we will examine the security challenges and threats associated with generative AI, including the deception of humans with generated media and the deception of machine learning systems. In the first part of the talk, we look at threat scenarios in which generative models are used to produce content that is impossible to distinguish from human-generated content; such fake content is often used for fraudulent and manipulative purposes. As generative models evolve, these attacks become easier to automate and require less expertise, while detecting them becomes increasingly difficult. This part will provide an overview of the current challenges in detecting fake media in human and machine interactions and the effects of genAI media labeling on consumers' trust. The second part will cover exploits of LLMs that disrupt alignment or steal sensitive information. Existing attacks demonstrate that LLM content filters can be easily bypassed with specific inputs, leading to the leakage of private information. From an alternative perspective, we show that obfuscating prompts offers an effective way to protect intellectual property: with minimal overhead, we can maintain similar utility while safeguarding confidential data, highlighting that defenses in foundation models may require fundamentally different approaches to utilize their inherent strengths.
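The second part of the abstract mentions obfuscating prompts to protect confidential data; the following minimal sketch shows the general idea under our own assumptions (placeholder substitution before a prompt leaves the organisation, restoration afterwards) and is not the method evaluated in the talk.

```python
# A minimal sketch of the prompt-obfuscation idea only loosely described above:
# confidential names are swapped for placeholders before a prompt is sent to an
# external LLM and mapped back in the answer. The terms and placeholders are our
# own illustrative assumptions, not the technique from the talk.
CONFIDENTIAL = {"Project Falcon": "PROJECT_1", "ACME GmbH": "ORG_1"}

def obfuscate(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace confidential terms with neutral placeholders."""
    mapping = {}
    for secret, placeholder in CONFIDENTIAL.items():
        if secret in prompt:
            prompt = prompt.replace(secret, placeholder)
            mapping[placeholder] = secret
    return prompt, mapping

def deobfuscate(text: str, mapping: dict[str, str]) -> str:
    """Restore the original terms in the model's answer."""
    for placeholder, secret in mapping.items():
        text = text.replace(placeholder, secret)
    return text

prompt = "Summarise the risks of Project Falcon for ACME GmbH."
safe_prompt, mapping = obfuscate(prompt)
# safe_prompt is what would actually leave the organisation; the answer is faked here.
fake_answer = "PROJECT_1 exposes ORG_1 to supply-chain and data-leakage risks."
print(safe_prompt)
print(deobfuscate(fake_answer, mapping))
```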
More details will follow soon.
Please have a look at last year's school, the 2024 Summer School on Usable Security, the 2024 Summer School on Privacy-Preserving Cryptography, the Summer School 2023, the Summer School 2022, or our Digital Summer School 2021 to get a general idea of the event.
If you have any questions about any of our summer schools, our Summer School team will be glad to help via [email protected].
Please note that we publish speakers and topics/titles on our website as soon as they are confirmed; please refrain from requesting titles and detailed topics via e-mail. If you prefer to wait with your application until the detailed program is finalized, that is perfectly fine. We simply want to give interested students the opportunity to register early and secure their spot ahead of time.