AI Security SIG
The mission of the AI Security Special Interest Group (SIG) is to advance the use and understanding of artificial intelligence (AI), with a particular emphasis on large language models (LLMs) for security and incident response.
In undertaking this mission, the AI Security SIG fosters collaboration among its members, and actively encourages the sharing of ideas, knowledge (via presentations, papers, tutorials, code, ...), and practical experiences. We believe in the power of collective intelligence to navigate the complex landscape of AI security, and are committed to making a positive and lasting impact in this important field.
More signal, less noise: finally, because the field of AI/LLMs is extremely dynamic, the SIG acts as a filter, sharing only topics relevant to IT security so that each member benefits from the group's collective curation of knowledge.
- AI for Defenders: We aim to explore the effective use of AI and LLMs by IT security defenders, Computer Security Incident Response Teams (CSIRTs), researchers, and educators. This includes sharing knowledge among SIG members, as well as sharing use cases that benefit from AI/LLMs and the tools that support those use cases.
- AI for Attackers: We want to understand how adversaries currently use (and in the future might use) AI and LLMs, so that we can develop techniques for detecting and mitigating such threats. This involves sharing knowledge on how adversaries cleverly repurpose or misuse AI/LLMs, as well as on the security of AI systems themselves.
- Other related aspects of the intersection between AI and cybersecurity, such as (but not limited to) questions about:
  - data leakage when querying models (offline LLMs vs. cloud approaches); data leakage involving training data
  - tainting of model training data
  - bias in models
  - adversarial attacks against models
  - robustness of models
- Aaron Kaplan (Liaison)
- Jeffrey Carpenter (Liaison)
- Patrick Grau (Bosch CERT)