This workshop aims to bring together academic researchers and industrial practitioners with diverse expertise (mainly in security & privacy and machine learning, but also from application domains) to collectively explore and discuss resilient and trustworthy machine learning-powered applications and systems, share their views, experiences, and lessons learned, and offer their insights and perspectives, so as to converge on a systematic approach to securing such systems.
Topics of interest include (but are not limited to):
Time | Duration | Session |
---|---|---|
08:30-08:45 | 15 min | ARTMAN '24 opening remarks by Gregory BLANC, Takeshi TAKAHASHI and Zonghua ZHANG |
08:50-09:50 | 60 min | Session 1: ML for Cybersecurity |
10:00-10:30 | 30 min | Break |
10:30-11:30 | 60 min | Keynote 1: Customized attacks and defense strategies for robust and privacy-preserving federated learning by Melek ÖNEN |
11:30-12:10 | 40 min | Session 2: Robustness, privacy and safety for ML systems 1 |
12:10-13:30 | 80 min | Lunch break |
13:30-14:30 | 60 min | Keynote 2: Prospects and Limits of Explainable AI in Computer Security by Christian WRESSNEGGER |
14:30-15:10 | 40 min | Session 3: Robustness, privacy and safety for ML systems 2 |
15:10-15:40 | 30 min | Break |
15:40-16:40 | 60 min | Session 4: Attacks on ML algorithms |
16:40-17:00 | 20 min | ARTMAN '24 closing remarks by Gregory BLANC, Takeshi TAKAHASHI and Zonghua ZHANG |
Accepted papers will be published by IEEE Computer Society Conference Publishing Services (CPS) and will appear in the Computer Society Digital Library and IEEE Xplore® in an ACSAC Workshops 2024 volume alongside the main ACSAC 2024 proceedings. ACSAC is currently transitioning to technical sponsorship by the IEEE Computer Society's Technical Community on Security and Privacy (TCSP) and expects approval before the proceedings are compiled.
ACSAC is being held in the US, so participants from outside the US may require a visa to travel to the conference and workshops. Since the US visa process varies by nationality, we advise any author submitting work to ARTMAN to request a visa letter in advance by following the instructions found here. Authors outside the US should apply for a visa letter in anticipation of their work being accepted; the letter does not indicate that the work has been accepted to the workshop, but is meant to mitigate potential delays and last-minute requests that could affect travel plans. We also encourage the organizers to decide on paper acceptance as early as possible so that authors who require a US visa have time to obtain one.
Keynote 1: Customized attacks and defense strategies for robust and privacy-preserving federated learning
by Melek Önen
In this talk, we will review potential attacks against the robustness and privacy of federated learning with a particular interest in those customized to the actual setting or the underlying machine learning approach. We will first consider the existence of stragglers (slow, late-arriving clients) and their impact on the performance of the FL framework and study potential defense strategies. We will then focus on federated graph learning and explore dedicated attacks against and defenses for the privacy of the graph.
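The abstract refers to stragglers and their effect on federated learning aggregation. Purely as an illustrative sketch, not taken from the talk, the following toy Python example shows a FedAvg-style server round that simply drops late-arriving clients after a deadline; all names and the drop-stragglers policy are hypothetical.

```python
import random

def fedavg_round(global_model, clients, deadline_fraction=0.8):
    """One FedAvg-style aggregation round that simply drops stragglers.

    global_model     : list[float], toy parameter vector
    clients          : list of callables returning (local_model, num_samples)
    deadline_fraction: fraction of clients assumed to answer before the
                       deadline; the rest are treated as stragglers and
                       ignored in this round (hypothetical policy).
    """
    n_in_time = max(1, int(deadline_fraction * len(clients)))
    responding = random.sample(clients, n_in_time)  # clients that met the deadline

    results = [client(global_model) for client in responding]
    total = sum(n for _, n in results)
    # Sample-weighted average of the received local models (FedAvg aggregation).
    return [sum(m[i] * n for m, n in results) / total
            for i in range(len(global_model))]

if __name__ == "__main__":
    # Toy clients: each nudges the model halfway toward its own local target.
    def make_client(target, num_samples):
        return lambda model: ([0.5 * (m + target) for m in model], num_samples)

    clients = [make_client(t, n) for t, n in [(1.0, 10), (2.0, 30), (3.0, 5), (1.5, 20)]]
    model = [0.0, 0.0]
    for _ in range(5):
        model = fedavg_round(model, clients)
    print(model)
```

Real straggler-aware schemes are considerably more involved (e.g., asynchronous or weighted aggregation); the sketch only shows where late clients would enter the aggregation step.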
Melek Önen is a professor in the Digital Security department at EURECOM (Sophia-Antipolis, France). Her main research topics are applied cryptography, information security, and privacy. She currently has a particular interest in studying and designing trustworthy artificial intelligence. She has been, and continues to be, involved in many European and French national research projects.
Keynote 2: Prospects and Limits of Explainable AI in Computer Security
by Christian Wressnegger
Learning-based systems effectively assist in various computer security challenges, such as network intrusion prevention, reverse engineering, vulnerability discovery, and malware detection. However, modern (deep) learning methods often lack understandable reasoning in their decision process, making crucial decisions less trustworthy. Recent advances in "Explainable AI" (XAI) have turned the tables, enabling precise relevance attribution of input features for otherwise opaque models. This progress has raised expectations that these techniques can also benefit defense against attacks on computer systems and even on machine learning models themselves. This talk explores the prospects and limits of XAI in computer security, demonstrating where it can and cannot (yet) be used reliably.
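To make the notion of relevance attribution mentioned in the abstract concrete, here is a minimal sketch, not taken from the talk, of input-times-gradient attribution for a toy linear "malware vs. benign" classifier; the model, weights, and feature vector are entirely hypothetical, and gradient-based attribution is only one of many XAI techniques.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_x_gradient(w, b, x):
    """Input-times-gradient relevance attribution for a logistic classifier.

    w, b : weights and bias of an (already trained) linear model
    x    : one input sample, e.g., a feature vector extracted from an executable

    Returns a per-feature relevance score: features pushing the prediction
    toward the 'malicious' class get positive relevance, the others negative.
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad = p * (1.0 - p) * w        # dp/dx for a linear model under the sigmoid
    return x * grad                 # input-times-gradient attribution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=5)                      # stand-in for learned weights
    b = 0.1
    x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])    # toy binary feature vector
    print(input_x_gradient(w, b, x))
```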
Christian Wressnegger is an Assistant Professor of computer science at the Karlsruhe Institute of Technology, heading the chair of "Artificial Intelligence & Security". Additionally, he is the speaker of the "KIT Graduate School Cyber Security" and a PI at the "KASTEL Security Research Labs," one of three competence centers for cyber security in Germany. He holds a Ph.D. from TU Braunschweig and graduated from Graz University of Technology, where he majored in computer science. His research revolves around combining machine learning and computer security. For instance, he develops learning-based methods for attack detection and vulnerability discovery. More recently, he has also focused on the robustness, security, and interpretability of machine learning methods. He is particularly visible for his work on the security of "Explainable AI" (XAI). He publishes his work in the proceedings of highly prestigious conferences (e.g., IEEE S&P, USENIX Security, ACM CCS, and ISOC NDSS) and contributes to the community by serving on program committees and organizing conferences (e.g., PC chair at ARES 2020, the German national IT-Security conference "GI Sicherheit" in 2022 and 2024, and the EuroS&P Workshops in 2024 and 2025).
This workshop is co-located with the ACSAC 2024 conference and is partially supported by the GRIFIN project (ANR-20-CE39-0011).