Speaker: Fabio Pierazzi, King's College London
Abstract:
Machine Learning (ML) has shown incredible success in many applications, including computer vision and speech recognition. However, the use of ML can also increase the attack surface, paving the way for attackers to compromise the confidentiality, integrity, or availability of ML systems.
This is especially relevant in hostile environments such as cybersecurity, where attackers want to evade detection and cause the ML system to malfunction. This brief course will provide an overview of the challenges and trends in assessing the risks and robustness of applying machine learning in adversarial settings, i.e., contexts that assume the presence of a hostile adversary and that require modifying complex objects (e.g., software). We will use malware detection as the main case study, but the discussion applies to any hostile environment where adversaries could gain some benefit.
Researchers have published thousands of papers demonstrating the security and privacy risks of Machine Learning (ML) models, unfortunately without any definitive solution for robustness. Yet, ML models are being widely deployed commercially, with the latest examples being Large Language Models and Generative AI. In the last part of the talk, we will reflect on the gaps between academic research and industry practice, to understand the key elements that lead to the deployment of ML models despite the risks identified by academia. A first, prominent aspect is a misalignment in threat models: academics tend to focus on the security of a single ML component, whereas industry deploys much more complex pipelines that also include traditional security solutions such as signatures and manual inspection. After discussing the main gaps, we will examine research trends aimed at overcoming these differences, and discuss how we could develop more impactful research and deploy safer systems.
A brief outline of the topics is as follows: Introduction to Machine Learning in Hostile Environments, including Cybersecurity; Taxonomy of adversarial attacks; Attacks on XAI methods; Security of foundation models (Generative AI, Diffusion Models, LLMs); Defense directions.
Bio: Fabio Pierazzi is a Senior Lecturer in Computer Science and Deputy Head of the Cybersecurity group at the Department of Informatics of King’s College London, and will start as Associate Professor at University College London on November 1st, 2024. His research interests are at the intersection of systems security and machine learning, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity, adaptive attackers). Homepage: https://fabio.pierazzi.com
Host: Prof. Luca Ferretti (luca.ferretti@unimore.it)