The AsiaCCS 2024 Workshop on Secure and Trustworthy Deep Learning Systems (SecTL) is a premier platform for researchers, engineers, and professionals from academia, government, and industry. Its purpose is to facilitate the exchange of ideas, showcase emerging research, and discuss future strategies for the design and deployment of secure and trustworthy deep learning systems in real-world settings.

Deep learning systems are increasingly being adopted across diverse sectors of society, from data science and robotics to healthcare, economics, and safety-critical applications. Yet a prevailing concern is that many current deep learning algorithms do not consistently guarantee safety or robustness, especially when faced with unpredictable inputs or conditions. For deep learning systems to be deemed reliable, they must exhibit resilience against disturbances, attacks, failures, and inherent biases. Moreover, these systems must offer assurances of avoiding situations that are unsafe or beyond recovery.
In this workshop, we welcome submissions spanning research papers, position papers, and works in progress across the entire spectrum of secure and trustworthy deep learning. Areas of interest include, but are not limited to:
- Robust Deep Learning Systems: attacks against deep learning systems and corresponding defenses
- Privacy-Preserving Deep Learning Systems
- Privacy Inference Attacks against Deep Learning Systems, e.g., membership inference, model extraction
- Fairness and Equity in Decision Making, and the Intersection of Fairness, Privacy, Robustness, and Accountability
- Machine Unlearning and Verification Mechanisms
- Transparency and Interpretability of AI
- Trustworthy Deep Learning for Domain-Specific Applications, e.g., energy, medical, manufacturing
- Secure Federated Learning
- Security and Privacy in On-Device Applications
- Detection of Anomalies and Distribution Shifts
- Issues Specific to Generative AI and Large Models, e.g., toxicity, information leakage, prompt injection
Paper submission site: https://sectl2024.hotcrp.com/.
Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. The review process is double-blind, so the submitted PDF must be anonymized. Submissions must be in the double-column ACM SIG Proceedings format (see here) and must not exceed 12 pages. Position papers describing work in progress and papers describing novel testbeds are also welcome. Only PDF files will be accepted. Authors of accepted papers must guarantee that their papers will be presented at the workshop, and at least one author of each paper must register at the appropriate conference rate. Accepted papers will be published in the ACM Digital Library. There will also be a best paper award.
- Kwok Yan Lam (Nanyang Technological University, Singapore)
- Shuo Wang (CSIRO's Data61, Australia)
- Hongsheng Hu (CSIRO's Data61, Australia)
- Xiaoning (Maggie) Liu, RMIT University
- Wanyu Lin, The Hong Kong Polytechnic University
- Vera Rimmer, KU Leuven
- Yufei Chen, City University of Hong Kong
- Shifeng Sun, Shanghai Jiao Tong University
- Lingchen Zhao, Wuhan University
- Yanjun Zhang, University of Technology Sydney
- Guangyu Shen, Purdue University
- Yansong Gao, CSIRO's Data61
- Nan Wu, CSIRO's Data61
- Bang Wu, CSIRO's Data61
- Shangqi Lai, CSIRO's Data61
- Ruoxi Sun, CSIRO's Data61
- Benjamin Zi Hao Zhao, Macquarie University
- Zhi Zhang, University of Western Australia
- Ziyao Liu, Nanyang Technological University
- Gareth Tyson, Hong Kong University of Science and Technology
- Huaming Chen, University of Sydney
- Stjepan Picek, Radboud University
- Xingliang Yuan (Monash University, Australia)
- Minhui (Jason) Xue (CSIRO's Data61, Australia)