Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we are interested in developing intelligent systems that can learn to protect computers from attacks and identify security problems automatically. On the other hand, we explore the security and privacy of machine learning by developing novel attacks and defenses.
We are part of the Berlin Institute for the Foundations of Learning and Data (BIFOLD) at Technische Universität Berlin. Previously, we worked at Technische Universität Braunschweig and the University of Göttingen.
October 15, 2025 — We are thrilled to receive the Distinguished Paper Award at CCS for our work on manipulating weather forecasts of AI models 🏆.
October 13, 2025 — We are attending CCS in Taipei 🇹🇼. Erik is presenting our work on manipulating weather forecasts of AI models, and Anna is presenting a workshop paper on threat modeling for cloud applications.
October 1, 2025 — We are starting the winter semester with new courses, including our lecture on adversarial machine learning and projects on AI attacks and defenses. Register on the ISIS platform 📚.
August 13, 2025 — We are attending the USENIX Security Symposium in Seattle 🇺🇸. Felix is presenting our paper on attacking virtual backgrounds in video calls.
See all news and updates of the research group.
AML — Adversarial Machine Learning
This integrated lecture is concerned with adversarial machine learning. It explores various attacks on learning algorithms, including white-box and black-box adversarial examples, poisoning, backdoors, membership inference, and model extraction. It also examines the security and privacy implications of these attacks and discusses defensive strategies, ranging from threat modeling to integrated countermeasures.
This lab is a hands-on course that explores machine learning in computer security. Students design and develop intelligent systems for security problems such as attack detection, malware clustering, and vulnerability discovery. The developed systems are trained and evaluated on real-world data, providing insight into their strengths and weaknesses in practice. The lab is a continuation of the lecture "Machine Learning for Computer Security", and knowledge from that course is therefore expected.
See all teaching courses.
LLM-based Vulnerability Discovery through the Lens of Code Metrics.
48th IEEE/ACM International Conference on Software Engineering (ICSE), 2026. (to appear)
Manipulating Feature Visualizations with Gradient Slingshots.
Advances in Neural Information Processing Systems 39 (NeurIPS), 2025. (to appear)
Adversarial Observations in Weather Forecasting.
32nd ACM Conference on Computer and Communications Security (CCS), 2025.
Distinguished Paper Award
Tiny Sensors, Big Threats: Assessing Motion Sensor-based Fingerprinting in Mobile Systems.
27th International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), 2025.
See all publications of the research group.
AIGENCY — Opportunities and Risks of Generative AI in Security
The project aims to systematically investigate the opportunities and risks of generative artificial intelligence in computer security. It explores generative models as a new tool as well as a new threat. The project is joint work with Fraunhofer AISEC, CISPA, FU Berlin, and Aleph Alpha.
MALFOY — Machine Learning for Offensive Computer Security
The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how learning algorithms can be used by attackers and how this threat can be effectively mitigated.
ALISON — Attacks against Machine Learning in Structured Domains
The goal of this project is to investigate the security of learning algorithms in structured domains. That is, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms rather than the feature space.
See all projects of the research group.
Technische Universität Berlin
Machine Learning and Security, TEL 8-2
Hardenbergstr. 40A
10623 Berlin, Germany
Office: office@mlsec.tu-berlin.de
Responsible according to § 55 Sect. 2 RStV (German Interstate Broadcasting Treaty):
Prof. Dr. Konrad Rieck