The 2nd ACM Workshop on Secure and Trustworthy Deep Learning Systems
(SecTL 2024)

held in conjunction with ACM AsiaCCS'24
Singapore, 2 July 2024



Important Dates

1st Round Submissions Due: 8 December 2023 (23:59 AoE)
Notification: 20 January 2024
Camera-ready Due: 16 May 2024

2nd Round Submissions Due: 18 March 2024 (23:59 AoE)
Notification: 26 April 2024
Camera-ready Due: 16 May 2024

Workshop Date: 2 July 2024

News

25 Oct 2023: The submission site is now open and accepting submissions.
September 2023: The workshop website is up!

Accepted Papers

  • Explainability versus Security: The Unintended Consequences of xAI in Cybersecurity. Marek Pawlicki (Bydgoszcz University of Science and Technology), Aleksandra Pawlicka (University of Warsaw), Rafał Kozik (Bydgoszcz University of Science and Technology), Michał Choraś (Bydgoszcz University of Science and Technology)
  • Identifying and Generating Edge Cases. Niklas Bunzel (Fraunhofer SIT | ATHENE | TU Darmstadt), Nicolas Göller (TU Darmstadt), Raphael Antonius Frick (Fraunhofer SIT | ATHENE)
  • Utilizing Large Language Models with Human Feedback Integration for Generating Dedicated Warning for Phishing Emails. Quan Hong Nguyen (Monash University), Tingmin Wu (CSIRO's Data61), Van Nguyen (Monash University), Carsten Rudolph (Monash University), Xingliang Yuan (Monash University), Jason Xue (CSIRO's Data61)
  • SmartGenerator4UI: A Web Interface Element Recognition and HTML Generation System Based on Deep Learning and Image Processing. Yanhui Liang (South-Central Minzu University)
  • Towards Evaluating the Robustness of Automatic Speech Recognition Systems via Audio Style Transfer. Weifei Jin (Beijing University of Posts and Telecommunications), Yuxin Cao (Tsinghua University), Junjie Su (Beijing University of Posts and Telecommunications), Qi Shen (Beijing University of Posts and Telecommunications), Kai Ye (Tsinghua University), Derui Wang (CSIRO’s Data61), Jie Hao (Beijing University of Posts and Telecommunications), Ziyao Liu (Nanyang Technological University)
  • MEGEX: Data-Free Model Extraction Attack Against Gradient-Based Explainable AI. Takayuki Miura (NTT Social Informatics Laboratories), Toshiki Shibahara (NTT), Naoto Yanai (Osaka University)
  • Signals Are All You Need: Detecting and Mitigating Digital and Real-World Adversarial Patches Using Signal-Based Features. Niklas Bunzel (Fraunhofer SIT | ATHENE | TU Darmstadt), Raphael Antonius Frick (Fraunhofer SIT | ATHENE), Gerrit Klause (Fraunhofer SIT | ATHENE | TU Darmstadt), Aino Schwarte (TU Darmstadt), Jonas Honermann (TU Darmstadt)

Call for Papers

The AsiaCCS 2024 Workshop on Secure and Trustworthy Deep Learning Systems (SecTL) serves as a premier platform for researchers, engineers, and professionals from academia, government, and industry. Its purpose is to facilitate the exchange of ideas, showcase emerging research, and discuss future strategies for building and deploying secure and trustworthy deep learning systems for real-world scenarios. Deep learning systems are finding their place in diverse sectors of society, from data science and robotics to healthcare, economics, and safety-critical applications. Yet a prevailing concern is that many current deep learning algorithms do not consistently ensure safety or robustness, especially when faced with unpredictable variables. For deep learning systems to be deemed reliable, they need to exhibit resilience against disturbances, attacks, failures, and inherent biases. Moreover, it is imperative that these systems offer assurances of avoiding situations that are unsafe or beyond recovery. In this workshop, we welcome submissions spanning research papers, position papers, and works-in-progress across the entire spectrum of secure and trustworthy deep learning. Areas of interest include (but are not limited to):

  • Robust Deep Learning Systems: Attacks against deep learning systems and corresponding defenses.
  • Privacy-Preserving Deep Learning Systems
  • Privacy inference attacks against deep learning systems, e.g., membership inference, model extraction.
  • Fairness and Equity in Decision Making, and the intersection among Fairness, Privacy, Robustness, Accountability, etc.
  • Machine Unlearning and Verification Mechanism
  • Transparency and Interpretability of AI
  • Trustworthy Deep Learning for Domain-Specific Applications, e.g., energy, medical, manufacturing
  • Secure Federated Learning
  • Security and Privacy in On-Device Applications
  • Detection of Anomalies and Distribution Shifts
  • Issues Specific to Generative AI and Large Models, e.g., toxicity, information leakage, prompt injection

Submission Instructions

Paper submission site: https://sectl2024.hotcrp.com/.

Submitted papers must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. The review process is double-blind, so the submitted PDF must be anonymized. Submissions must be in double-column ACM SIG Proceedings format (see here) and should not exceed 12 pages. Position papers describing work in progress and papers describing novel testbeds are also welcome. Only PDF files will be accepted. Authors of accepted papers must guarantee that their papers will be presented at the workshop, and at least one author of each paper must be registered at the appropriate conference rate. Accepted papers will be published in the ACM Digital Library. There will also be a best paper award.


Organizers

Program Chairs
- Kwok Yan Lam (Nanyang Technological University, Singapore)
- Shuo Wang (CSIRO's Data61, Australia)

Web Chair
- Hongsheng Hu (CSIRO's Data61, Australia)

Program Committee
Xiaoning (Maggie) Liu, RMIT
Wanyu Lin, PolyU
Vera Rimmer, KU Leuven
Yufei Chen, CityU
Shifeng Sun, Shanghai Jiao Tong University
Lingchen Zhao, Wuhan University
Yanjun Zhang, UTS
Guangyu Shen, Purdue University
Yansong Gao, CSIRO’s Data61
Nan Wu, CSIRO’s Data61
Bang Wu, CSIRO’s Data61
Shangqi Lai, CSIRO’s Data61
Ruoxi Sun, CSIRO’s Data61
Benjamin Zi Hao Zhao, Macquarie University
Zhi Zhang, UWA
Ziyao Liu, NTU
Gareth Tyson, HKUST
Huaming Chen, University of Sydney
Stjepan Picek, Radboud University


Steering Committee
- Xingliang Yuan (Monash University, Australia)
- Minhui (Jason) Xue (CSIRO's Data61, Australia)

Keynotes

TBD


Program

TBD


Contact

Email: Shuo Wang



Updated: September 26, 2023