ARTMAN -- Workshop on Recent Advances in Resilient and Trustworthy MAchine learniNg

ARTMAN 2024, workshop co-located with ACSAC 2024 (December 9, 2024 -- Waikiki)

Overview

This workshop aims to bring together academic researchers and industrial practitioners from different domains with diverse expertise (mainly security & privacy and machine learning, but also application domains) to collectively explore and discuss resilient and trustworthy machine learning-powered applications and systems, to share their views, experiences, and lessons learned, and to offer their insights and perspectives, so as to converge on a systematic approach to securing such systems.

Topics of Interest

Topics of interest include (but are not limited to):

  • Threat modeling and risk assessment of ML systems and applications in intelligent systems, including, but not limited to, anomaly detection, failure prediction, root cause analysis, and incident diagnosis
  • Data-centric attacks and defenses for ML systems and applications in intelligent systems, such as model evasion via targeted perturbations of test samples and data poisoning of training examples
  • Adversarial machine learning, including adversarial examples of input data and adversarial learning algorithms developed for intelligent systems
  • ML robustness: testing, simulation, verification, validation, and certification of the robustness of ML pipelines (not only ML algorithms and models) in intelligent systems, including but not limited to data-centric analytics, model-driven methods, and hybrid methods
  • AI system safety: dependability topics related to AI system development and deployment environments, including hardware, ML platforms and frameworks, and software
  • Trust in AI systems and applications, mainly the trust issues arising from interactions between human users and AI systems (e.g., man-machine symbiosis, human-machine teaming), with a particular focus on interpretable, explainable, accountable, transparent, and fair AI systems and applications in intelligent systems
  • Resilience by reaction: leveraging AI/ML algorithms, especially knowledge-informed models, to improve the resilience and trustworthiness of intelligent systems

Programme

Time         Duration  Session
08:30-08:45  15 min    ARTMAN '24 opening remarks by Gregory BLANC, Takeshi TAKAHASHI and Zonghua ZHANG
08:50-09:50  60 min    Session 1: ML for Cybersecurity
10:00-10:30  30 min    Break
10:30-11:30  60 min    Keynote 1: Customized attacks and defense strategies for robust and privacy-preserving federated learning by Melek ÖNEN
11:30-12:10  40 min    Session 2: Robustness, privacy and safety for ML systems 1
12:10-13:30  80 min    Lunch break
13:30-14:30  60 min    Keynote 2: Prospects and Limits of Explainable AI in Computer Security by Christian WRESSNEGGER
14:30-15:10  40 min    Session 3: Robustness, privacy and safety for ML systems 2
15:10-15:40  30 min    Break
15:40-16:40  60 min    Session 4: Attacks on ML algorithms
16:40-17:00  20 min    ARTMAN '24 closing remarks by Gregory BLANC, Takeshi TAKAHASHI and Zonghua ZHANG

Submission Guidelines

  • Submissions should be 6-10 pages, excluding references and appendices, using the double-column IEEE template available here with \documentclass[conference,compsoc]{IEEEtran} (a minimal skeleton is sketched after this list). Up to 5 additional pages may be used for references and well-referenced appendices; note that reviewers are not expected to read these appendices.
  • All submissions must be anonymous, i.e., author names and affiliations must not be included. Authors may cite their own work but must do so in the third person.
  • Accepted workshop papers will be published by IEEE Computer Society Conference Publishing Services (CPS), see below.
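
For reference, a minimal skeleton of an anonymized submission using the required template might look like the following. This is only an illustrative sketch: the title, section name, and bibliography file name are placeholders, not prescribed by the workshop.

    \documentclass[conference,compsoc]{IEEEtran}
    % Minimal, illustrative skeleton for an anonymized ARTMAN submission.
    % All titles and names below are placeholders.
    \begin{document}

    \title{Your Paper Title}
    % Submissions are anonymous: do not include author names or affiliations.
    \author{\IEEEauthorblockN{Anonymous Author(s)}}

    \maketitle

    \begin{abstract}
    Abstract text.
    \end{abstract}

    \section{Introduction}
    Body text. Cite your own prior work in the third person.

    \bibliographystyle{IEEEtran}
    \bibliography{references}  % "references.bib" is a placeholder file name

    \end{document}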

Submission Link

EasyChair

Important Dates

  • September 15, 2024: Submission Deadline (extended)
  • October 20, 2024: Acceptance Notification
  • November 1, 2024: Camera-Ready Paper Submission Deadline
  • December 9, 2024: Workshop

Publication

Accepted papers will be published by IEEE Computer Society Conference Publishing Services (CPS) and will appear in the Computer Society Digital Library and IEEE Xplore® in an ACSAC Workshops 2024 volume alongside the main ACSAC 2024 proceedings. ACSAC is currently transitioning to technical sponsorship by the IEEE Computer Society's Technical Community on Security and Privacy (TCSP); approval is expected before the proceedings are compiled.

Visa Request for Workshop Participants

ACSAC is being held in the US, so participants from outside the US may require a visa to travel to the conference and workshops. Since the US visa process varies by nationality, we advise any author submitting work to ARTMAN to request a visa letter in advance by following the instructions found here. Authors outside the US should apply for a visa letter in anticipation of their work being accepted; the letter does not indicate that the work will be accepted to the workshop, but is meant to mitigate potential delays and last-minute requests that could impact travel plans. Workshop organizers are likewise encouraged to make acceptance decisions as early as possible, so that authors requiring a US visa have time to obtain one.

Keynotes

Customized attacks and defense strategies for robust and privacy-preserving federated learning

by Melek Önen

Abstract

In this talk, we will review potential attacks against the robustness and privacy of federated learning, with a particular interest in attacks customized to the actual setting or the underlying machine learning approach. We will first consider the existence of stragglers (slow, late-arriving clients) and their impact on the performance of the FL framework, and study potential defense strategies. We will then focus on federated graph learning and explore dedicated attacks against, and defenses for, the privacy of the graph.

Biography

Melek Önen is a professor in the Digital Security department at EURECOM (Sophia-Antipolis, France). Her main research topics are applied cryptography, information security, and privacy. She currently has a particular interest in studying and designing trustworthy artificial intelligence, and she has been involved in many European and French national research projects.

Prospects and Limits of Explainable AI in Computer Security

by Christian Wressnegger

Abstract

Learning-based systems effectively assist in various computer security challenges, such as preventing network intrusions, reverse engineering, vulnerability discovery, or detecting malware. However, modern (deep) learning methods often lack understandable reasoning in their decision process, making crucial decisions less trustworthy. Recent advances in "Explainable AI" (XAI) have turned the tables, enabling precise relevance attribution of input features for otherwise opaque models. This progression has raised expectations that these techniques can also benefit defense against attacks on computer systems and even machine learning models themselves. This talk explores the prospects and limits of XAI in computer security, demonstrating where it can and cannot (yet) be used reliably.

Biography

Christian Wressnegger is an Assistant Professor of computer science at the Karlsruhe Institute of Technology, heading the chair of "Artificial Intelligence & Security". Additionally, he is the speaker of the "KIT Graduate School Cyber Security" and a PI at the "KASTEL Security Research Labs," one of three competence centers for cyber security in Germany. He holds a Ph.D. from TU Braunschweig and graduated from Graz University of Technology, where he majored in computer science. His research revolves around combining machine learning and computer security: for instance, he develops learning-based methods for attack detection and vulnerability discovery. More recently, he has also focused on the robustness, security, and interpretability of machine learning methods, and he is particularly visible for his work on the security of "Explainable AI" (XAI). He publishes his work in the proceedings of highly prestigious conferences (e.g., IEEE S&P, USENIX Security, ACM CCS, and ISOC NDSS) and contributes to the community by serving on program committees and organizing conferences (e.g., as PC chair of ARES 2020, the German national IT security conference "GI Sicherheit" in 2022 and 2024, and the EuroS&P Workshops in 2024 and 2025).

Organizing Committee

Program Chairs

  • Gregory Blanc (Telecom SudParis, Institut Polytechnique de Paris, France)
  • Takeshi Takahashi (National Institute of Information and Communications Technology, Japan)
  • Zonghua Zhang (CRSC R&D Institute, China)

TPC Members (to be completed)

  • Muhamad Erza Aminanto (Monash University, Indonesia)
  • Laurent Bobelin (INSA Centre Val de Loire, France)
  • Sajjad Dadkhah (University of New Brunswick, Canada)
  • Doudou Fall (Ecole Supérieure Polytechnique, Cheikh Anta Diop University, Senegal)
  • Joaquin Garcia-Alfaro (Telecom SudParis, Institut Polytechnique de Paris, France)
  • Pierre-François Gimenez (Inria, France)
  • Yufei Han (Inria, France)
  • Frédéric Majorczyk (DGA, France)
  • Ikuya Morikawa (Fujitsu, Japan)
  • Antonio Muñoz (University of Malaga, Spain)
  • Mehran Alidoost Nia (Shahid Beheshti University, Iran)
  • Misbah Razzaq (INRAE, France)
  • Gustavo Sánchez Collado (Karlsruhe Institute of Technology, Germany)
  • Balachandra Shanabhag (Cohesity, USA)
  • Toshiki Shibahara (NTT, Japan)
  • Pierre-Martin Tardif (Université de Sherbrooke, Canada)
  • Fredrik Warg (RISE Research Institutes of Sweden)
  • Akira Yamada (Kobe University, Japan)

This workshop is co-located with the ACSAC 2024 conference and is partially supported by the GRIFIN project (ANR-20-CE39-0011).