The 3rd ARTMAN workshop co-located with ACM CCS 2025 (October 13, 2025 -- Taipei, Taiwan)
Overview
This workshop brings together academic researchers and industrial practitioners with diverse expertise (mainly in security & privacy and artificial intelligence (AI)/machine learning (ML), but also in application domains) to explore and discuss resilient and trustworthy machine learning-powered applications and systems, share views, experiences, and lessons learned, and offer insights and perspectives, with the goal of converging on a systematic approach to securing such systems.
Topics of Interest
This workshop focuses on the resilience and trustworthiness of AI/ML-driven systems. Resilience refers to the ability of an AI/ML system to maintain required capability and expected performance in the face of adversity, covering both dependability (accidental failures) and security (intentional attacks) issues. Trustworthiness refers to the degree to which an AI/ML system gives users confidence in its capability and reliability in performing given tasks.
Topics of interest include (but are not limited to):
- Threat modeling and risk assessment of ML systems and applications in intelligent systems, including, but not limited to, anomaly detection, failure prediction, root cause analysis, incident diagnosis
- Data-centric attacks and defenses of ML systems and applications in intelligent systems, such as model evasion via targeted perturbations in testing samples, data poisoning in training examples
- Adversarial machine learning, including adversarial examples of input data and adversarial learning algorithms developed for intelligent systems
- ML robustness: testing, simulation, verification, validation, and certification of robustness of ML pipelines (not only ML algorithms and models) in intelligent systems, including but not limited to data-centric analytics, model-driven methods, and hybrid methods
- AI system safety: dependability topics related to AI system development and deployment environments, including hardware, ML platform and framework, software
- Trust in AI systems and applications: mainly the trust issues arising from interactions between human users and AI systems (e.g., man-machine symbiosis, human-machine teaming), with a particular focus on interpretable, explainable, accountable, transparent, and fair AI systems and applications in intelligent systems
- Resilience by reaction: leveraging AI/ML algorithms, especially knowledge-informed models, to improve the resilience and trustworthiness of intelligent systems
- Machine unlearning: measures to protect users' privacy against ML-based threats
- Sustainable AI: usable and robust small AI models; privacy-aware distillation or compression techniques; robust and trustworthy Federated Learning, trustworthy AI agents and embodied AI
Programme
The workshop will take place in room 201A.
| Time | Duration | Session |
|---|---|---|
| | | Session 1: Keynote 1 |
| 09:00-10:00 | 60 min | Keynote 1: Make trust the center of attention, by Dr. CK Chen (CyCraft) |
| 10:00-10:30 | 30 min | Break |
| | | Session 2: Technical session on securing AI |
| 10:30-11:00 | 30 min | Providing Certified Triggers in Trigger Set-Based Watermarking for Ownership Validation against Image Transformation |
| 11:00-11:30 | 30 min | Mitigating Membership Inference Vulnerability in Iterative Federated Clustering Algorithm |
| 11:30-13:30 | 120 min | Lunch break |
| | | Session 3: Keynote 2 |
| 13:30-14:30 | 60 min | Keynote 2: Computational Social Security: Uncovering Threats and Influence on Social Media, by Prof. Ming-Hung Wang (National Chung Cheng University) |
| | | Session 4: Technical session on cybersecurity applications |
| 14:30-15:00 | 30 min | Lightweight, Architecture-Agnostic IoT Malware Detection via Printable Strings |
| 15:00-15:30 | 30 min | Break |
| 15:30-16:00 | 30 min | Towards Explainable Machine Learning NIDS Reflecting Cyber-Attack Characteristics |
| 16:00-16:20 | 20 min | Cross-Architecture IoT Malware Analysis Using P-Code Intermediate Representation |
| 16:20-16:40 | 20 min | UAgent: Adversarial Co-evolution for Targeted Bug Revelation in Unit Testing |
| 16:40-17:00 | 20 min | ARTMAN '25 closing remarks, by Takeshi Takahashi |
The full schedule is available on the ACM CCS '25 website.
Keynotes
Make trust the center of attention
by Dr. CK Chen (CyCraft)
Abstract
This presentation investigates methodologies for enhancing Large Language Model (LLM) safety through contextual alignment, drawing from an industrial perspective. Our research originates from the development of a domain-specific LLM, where we observed that mechanisms designed to improve task performance also fortified model safety and alignment. Across the wide range of applications, what constitutes alignment varies with context. With domain knowledge, domain-dedicated tokens carry more value for solving domain-specific problems, and these tokens are expected to be the center of the model's focus. We introduce two distinct approaches for enforcing this alignment:
- White-Box Approach: For models with accessible internal states, we demonstrate a method to identify attention heads that consistently focus on domain-specific tokens. We show that anomalous activation patterns deviating from this specialized set of heads serve as a reliable indicator of potential attacks or out-of-domain misuse.
- Black-Box Approach: For models treated as an opaque system, we propose an inference-based technique. This method infers the model's attentional focus by monitoring the drop in output probabilities when domain-specific tokens are masked. A statistically significant decrease in these probabilities suggests a contextual drift, signaling a potential security risk.
In the end, we will share empirical findings from real-world penetration testing of LLM applications.
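The black-box idea above can be illustrated with a minimal sketch, under stated assumptions: all names here are hypothetical, and `toy_score` is a trivial bag-of-words stand-in for the opaque model's output confidence, not the actual scoring used in the keynote's system.

```python
def toy_score(prompt: str, domain_tokens: set[str]) -> float:
    """Toy stand-in for an opaque LLM's output confidence:
    the fraction of domain-specific tokens present in the prompt."""
    words = set(prompt.lower().split())
    return len(words & domain_tokens) / max(len(domain_tokens), 1)

def masked_drop(prompt: str, domain_tokens: set[str], score_fn) -> float:
    """Drop in the model's score when domain-specific tokens are masked out."""
    baseline = score_fn(prompt, domain_tokens)
    masked = " ".join(w for w in prompt.split() if w.lower() not in domain_tokens)
    return baseline - score_fn(masked, domain_tokens)

def is_contextual_drift(prompt: str, domain_tokens: set[str],
                        score_fn, threshold: float = 0.5) -> bool:
    """A small drop means the output barely depends on domain tokens,
    suggesting the query has drifted out of the intended context."""
    return masked_drop(prompt, domain_tokens, score_fn) < threshold
```

An in-domain query ("analyze this malware exploit against the firewall", with domain tokens such as `{"firewall", "malware", "exploit"}`) loses most of its score when those tokens are masked, whereas an off-topic query loses almost nothing and is flagged as drift; the `threshold` would in practice be calibrated statistically, as the abstract suggests.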
Computational Social Security: Uncovering Threats and Influence on Social Media
by Prof. Ming-Hung Wang (National Chung Cheng University)
Abstract
This talk highlights how computational techniques are advancing social security by uncovering digital threats and influence operations. Through case studies, we demonstrate how image analysis reveals online propaganda strategies, and how deep learning detects visual and textual cues of manipulation. We further present a multimodal stance detection approach that combines content and user behavior to identify political alignment and coordinated campaigns. These methods offer practical tools for combating misinformation, safeguarding public discourse, and enhancing the resilience of democratic societies in the digital era.
Submission Guidelines
Papers can be submitted in two categories: regular and short papers.
- Regular workshop paper submissions must be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall.
- Short paper submissions must be at most 4 pages excluding the bibliography and appendices, and at most 6 pages overall.
- Papers should be prepared in ACM format using LaTeX. Please follow the main CCS formatting instructions to prepare the submissions. The sigconf template is available here.
- All submissions must be in English and properly anonymized.
- All accepted papers (both regular and short) will be included in the proceedings and published in the ACM Digital Library and/or by ACM Press.
Please note that TPC members are not required to read the appendices, so the paper should be intelligible without them.
Submission Link
HotCRP
Important Dates
- Submission Deadline: ~~June 20, 2025 AoE~~ July 3, 2025 AoE (extended)
- Acceptance Notification: ~~August 8, 2025 AoE~~ August 15, 2025 AoE (extended)
- Camera-Ready Deadline: ~~August 22, 2025~~ September 3, 2025 (extended)
- Workshop Day: ~~October 17, 2025~~ October 13, 2025
Visa Request for Workshop Participants
ACM CCS 2025 is being held in Taiwan, so foreign participants may require a visa to travel to the conference and workshops. Official visa information can be obtained from the Bureau of Consular Affairs, Ministry of Foreign Affairs. We advise any author submitting work to ARTMAN to check their visa status and follow the instructions found here.
Organizing Committee
Program Chairs
- Gregory Blanc (Telecom SudParis, Institut Polytechnique de Paris, France)
- Takeshi Takahashi (National Institute of Information and Communications Technology, Japan)
- Zonghua Zhang
TPC Members
- Muhamad Erza Aminanto (Monash University, Indonesia)
- Agathe Blaise (Thales, France)
- Laurent Bobelin (INSA Centre Val de Loire, France)
- Andrea Ceccarelli (University of Florence, Italy)
- Alessandro Erba (Karlsruhe Institute of Technology, Germany)
- Pierre-François Gimenez (Inria, France)
- Yufei Han (Inria, France)
- Shouling Ji (Zhejiang University, China)
- Satoru Koda (Fujitsu, Japan)
- Frédéric Majorczyk (DGA, France)
- Andres Molina-Markham (The MITRE Corporation, USA)
- Antonio Muñoz (University of Malaga, Spain)
- Gustavo Sánchez Collado (Karlsruhe Institute of Technology, Germany)
- Balachandra Shanabhag (Cohesity, USA)
- Shreya Sharma (Meta, USA)
- Pierre-Martin Tardif (Université de Sherbrooke, Canada)
- Wei Wang (Xi'an Jiaotong University, China)
- Fredrik Warg (RISE Research Institutes of Sweden)
- Akira Yamada (Kobe University, Japan)

This workshop is co-located with the ACM CCS 2025 conference and is partially supported by the GRIFIN project (ANR-20-CE39-0011).