Research Lab on AI and Hardware Security

In this research project, students will investigate cutting-edge challenges at the intersection of machine learning, security, and system robustness. Participants may choose from several current research topics spanning model alignment, AI safety, and hardware-level attacks on modern accelerators. After selecting a topic, students will join an active research group and receive close guidance from an experienced supervisor throughout the project. Possible research directions include:

  • Robust Safety Alignment Against Neuron-Pruning Jailbreaks (see the illustrative sketch below)
  • GPU Rowhammer Attacks on Mixture-of-Experts (MoE) LLMs
  • Human-Guided Generative AI for High-Performance Hardware Fuzzing
  • Exploiting and Mitigating Vulnerabilities in Human-in-the-Loop Self-Learning LLM Agents
  • RAG Poisoning for LLMs

Students will gain hands-on experience designing experiments, analyzing security vulnerabilities, developing mitigation strategies, and communicating research findings. This project is ideal for students interested in AI safety, adversarial machine learning, systems security, or large-scale neural network architectures.
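To give a flavor of the first topic: neuron-pruning jailbreaks rest on the observation that removing a small set of neurons can change a model's behavior. The toy sketch below is purely illustrative, not the lab's actual method; the two-layer model and the simple weight-magnitude scoring rule are hypothetical stand-ins. It shows the basic mechanic of scoring and zeroing out hidden neurons in PyTorch.

    # Illustrative sketch only: structured neuron pruning on a toy model.
    # The model and the 25% magnitude-based pruning rule are hypothetical.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy two-layer MLP standing in for a single feed-forward block.
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

    # Score each hidden neuron by the L2 norm of its incoming weights.
    w = model[0].weight.detach()   # shape: (64, 16)
    scores = w.norm(dim=1)         # one score per hidden neuron

    # Prune (zero out) the lowest-scoring 25% of hidden neurons.
    k = int(0.25 * scores.numel())
    pruned = scores.topk(k, largest=False).indices
    with torch.no_grad():
        model[0].weight[pruned] = 0.0
        model[0].bias[pruned] = 0.0

    print(f"Pruned {k} of {scores.numel()} hidden neurons")

Research on this topic would study how such pruning interacts with safety alignment, and how alignment can be made robust against it.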

Ideal candidates have a background in machine learning and/or hardware/software security. Familiarity with Python, deep learning frameworks, or system-level programming is beneficial. Most importantly, students should bring strong curiosity and a passion for pushing the boundaries of modern security.

Additional Information

Supervisor: Prof. Dr.-Ing. Ahmad-Reza Sadeghi
Contact at Department: Dr.-Ing. Phillip Rieger
Availability: Spring, Summer, Fall 2026
Capacity: 6 students
Credits: 18 ECTS
Remote Option: Yes