Adversarial Machine Learning in Real-World Computer Vision Systems
Date: June 16, 2019
Location: Long Beach, CA, USA (co-located with CVPR 2019)
Abstract: As computer vision models are increasingly deployed in the real world, including in safety-critical applications such as self-driving cars, it is imperative that these models remain robust and secure even when subject to adversarial inputs.
This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to reach a consensus on a rigorous framework for formulating adversarial machine learning problems in computer vision, characterize the properties that ensure the security of perceptual models, and evaluate the consequences under various adversarial models. Finally, we hope to chart out important directions for future work and cross-community collaboration among the computer vision, machine learning, security, and multimedia communities.
The following is a tentative schedule and is subject to change prior to the workshop.
- Workshop paper submission deadline: 5/10/2019
- Notification to authors: 6/01/2019
- Camera ready deadline: 6/12/2019
Call For Papers
Submission deadline: May 10, 2019 Anywhere on Earth (AoE)
Notification sent to authors: June 1, 2019 Anywhere on Earth (AoE)
Submission server: https://easychair.org/cfp/AdvMLCV2019
The workshop will include contributed papers. Based on the PC’s recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.
We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:
- Test-time (exploratory) attacks: e.g., adversarial examples for neural networks
- Training-time (causative) attacks: e.g., data poisoning
- Physical attacks/defenses
- Differential privacy
- Privacy preserving generative models
- Game theoretic analysis on machine learning models
- Manipulation of crowd-sourcing systems
- Sybil detection
- Exploitable bugs in ML systems
- Formal verification of ML systems
- Model stealing
- Misuse of AI and deep learning
- Interpretable machine learning
Li Erran Li