Petar Radanliev

Professional Masters Programme Project Supervisor

Room 465, Wolfson Building, Parks Road, Oxford OX1 3QD
United Kingdom

Biography

Dr Petar Radanliev

Position: Professional Masters Programme Project Supervisor, AI and Cybersecurity
Department: Computer Science, University of Oxford
Contact:
ORCID: 0000-0001-5629-6857
Google Scholar: Link to Profile

I am a faculty member and Professional Masters Programme Project Supervisor at the University of Oxford, specialising in AI security, adversarial machine learning, and digital resilience. In parallel, I serve as a Research Associate at the Alan Turing Institute, where I contribute to the Trustworthy Digital Infrastructure (TDI) programme, advancing research on the security, governance, and resilience of AI systems underpinning national digital identity frameworks and critical infrastructure.

My research and teaching focus on the development and deployment of secure, trustworthy, and ethical artificial intelligence systems in real-world contexts. I supervise postgraduate projects in AI and cybersecurity, integrating cutting-edge research into hands-on learning environments that prepare students to address emerging threats in AI ecosystems.

Before my current appointment, I held postdoctoral research roles at Oxford, Cambridge, Imperial College London, and MIT, where I led and contributed to projects on AI-enabled threat detection, secure cyber-physical systems, and responsible AI deployment. I was awarded the Fulbright Fellowship at MIT and the University of North Carolina, and I am a recipient of the Prince of Wales Innovation Scholarship.

My PhD, awarded by the University of Wales in 2014, focused on AI-driven vulnerability detection in critical software supply chains. Since then, my work has addressed AI-specific risks, including adversarial inputs, data poisoning, model inversion, and algorithmic opacity. I have developed frameworks for AI penetration testing, ethical AI red teaming, and quantum-resilient AI architectures.

I am a frequently invited speaker at major cybersecurity conferences, including DEF CON, RSA, and Black Hat, and I have led professional training courses for Pearson and O’Reilly on topics such as algorithmic red teaming, AI malware analysis, and quantum AI security. My publications include over 100 peer-reviewed articles and several books, including Post-Quantum Security for AI and Beyond the Algorithm: AI, Security, Privacy, and Ethics.

My work is focused on securing the future of AI systems against sophisticated threats, while promoting ethical governance and secure-by-design engineering in AI development.

Teaching

I actively teach and supervise postgraduate projects in the areas of AI security and responsible AI practices.

Books

  • Quantum-Resistant Defences for Digital Security: How to endure a Technological Singularity event triggered by Artificial General Intelligence (2025) - Link to Book
  • The Rise of AI Agents: Integrating Artificial Intelligence, Blockchain Technologies, and Quantum Computing (2024) - Link to Book
  • Beyond the Algorithm: AI Security, Privacy, and Ethics (2024) - Link to Book
  • AI-Powered Digital Cyber Resilience (2025) - Link to Book

Research Interests

My research is dedicated to advancing the security and resilience of artificial intelligence systems, with a focus on adversarial machine learning, AI-specific threat modelling, and red teaming methodologies. I develop frameworks for detecting, mitigating, and anticipating AI-enabled cyber-attacks—including evasion, poisoning, model inversion, and prompt injection—across the AI lifecycle, from data pipelines and model training to deployment and inference.

I am particularly interested in algorithmic red teaming for evaluating model robustness under real-world attack scenarios, designing secure architectures for distributed AI systems, and developing observability tools for detecting anomalies and tampering in high-risk environments. My work also investigates the intersection of AI security and governance, including regulatory approaches to mitigating systemic risks from foundation models and autonomous agents.

Recent projects include the design of AI penetration testing methodologies aligned with MITRE ATLAS, the integration of ethical and secure-by-design principles into AI development workflows, and the creation of resilience frameworks for national digital infrastructure threatened by adversarial AI capabilities.

To date, I have published over 100 peer-reviewed articles and authored several books on AI security and adversarial risk, contributing to both the theoretical foundations and practical defences of secure AI. My ongoing research aims to ensure that the next generation of intelligent systems is robust, trustworthy, and capable of withstanding evolving cyber threats.

Thank you for your interest in my work. For more details, please visit my Google Scholar profile, ORCID profile (0000-0001-5629-6857), Scopus Author ID 57003734400, Loop profile 839254, or ResearcherID L-7509-2015.

Awards and Recognitions

In 2010, I was awarded the Prince of Wales Innovation Scholarship for my research on software supply chain cybersecurity. Link to Scholarship

In 2017, I was awarded the Fulbright Visiting Fellowship for my collaborative research in cybersecurity at MIT and the University of Cambridge. Link to Fulbright Award

Selected Publications

  1. Future developments in cyber risk assessment for the internet of things (2018) - A new quantitative model for measuring market cyber risk.
  2. Artificial Intelligence and Quantum Cryptography (2024) - Proposing new cryptographic protocols for secure communication in the post-quantum era.
  3. Digital Security by Design (2024) - Exploring integrated frameworks for secure digital systems.
  4. Ethics and Responsible AI Deployment (2024) - Discussing the ethical considerations in AI deployment.
  5. Cyber Diplomacy: Defining the Opportunities and Risks (2024) - Examining the cyber risks from AI, IoT, blockchain, and quantum computing integration.
  6. AI and the Next Paradigm Shift (2024) - Reflecting on past AI developments and future shifts.
  7. Capability Hardware Enhanced Instructions in AI Systems (2024) - Analysing cybersecurity threats in new AI systems.
  8. AI security and cyber risk in IoT systems (2024) - A cyber risk assessment approach that incorporates IoT risks through dependency modelling.
  9. Digital twins: artificial intelligence and the IoT cyber-physical systems in Industry 4.0 (2022) - A virtual representation operating as a real-time digital counterpart of a physical object or process (a digital twin).
  10. Artificial intelligence in cyber-physical systems (2021) - A new hierarchical cascading framework for analysing the evolution of AI decision-making in cyber-physical systems.
  11. The Rise and Fall of Cryptocurrencies (2024) - A comprehensive analysis of blockchain risks and opportunities in the Metaverse.

Hobbies and Interests

Exploring new and emerging blockchain projects (Airdrop Hunter: see my YouTube channel on airdrop hunting), AI security (see my YouTube channel on AI security), public speaking (Toastmasters), and enjoying nature in Oxford at Christ Church Meadow, Port Meadow, and Wytham Woods.