Adversarial Machine Learning in Real-World Computer Vision Systems
Date: June 16, 2019
Location: Long Beach, CA, USA (co-located with CVPR 2019)
Abstract—As computer vision models are increasingly deployed in the real world, including safety-critical applications such as self-driving cars, it is imperative that these models remain robust and secure even when subject to adversarial inputs.
This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to reach a consensus on a rigorous framework for formulating adversarial machine learning problems in computer vision, characterizing the properties that ensure the security of perceptual models, and evaluating the consequences under various adversarial models. Finally, we hope to chart out important directions for future work and cross-community collaborations spanning the computer vision, machine learning, security, and multimedia communities.
Schedule
The following is a tentative schedule and is subject to change prior to the workshop.
Poster Session #1 (11:15am-12:00pm)
- Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong and Tao Wei. Attacking Multiple Object Tracking using Adversarial Examples (#272)
- Yingwei Li, Song Bai, Yuyin Zhou, Cihang Xie, Zhishuai Zhang and Alan Yuille. Learning Transferable Adversarial Examples via Ghost Networks (#273)
- Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu and Zhuoqing Mao. Adversarial Sensor Attack on LIDAR-based Perception in Autonomous Driving (#274)
- Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong and Tao Wei. Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction (#274)
- Kálmán Szentannai, Jalal Al-Afandi and András Horváth. MimosaNet: An Unrobust Neural Network Preventing Model Stealing (#274)
- Yang Zhang, Hassan Foroosh, Philip David and Boqing Gong. CAMOU: Learning A Vehicle Camouflage for Physical Adversarial Attacks on Object Detectors in the Wild (#275)
Poster Session #2 (4:15pm-5:00pm)
- Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang and Boqing Gong. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks (#276)
- Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li and David Forsyth. Big but Imperceptible Adversarial Perturbations via Semantic Manipulation (#277)
- Thomas Brunner, Frederik Diehl and Alois Knoll. Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks (#278)
- Abinaya Kandasamy and Venkatesh Babu Radhakrishnan. Adversarial Frame (#279)
- Houpu Yao, Zhe Wang, Guangyu Nie, Yassine Mazboudi, Yezhou Yang and Yi Ren. Augmenting Model Robustness with Transformation-Invariant Attacks (#280)
- Modar Alfadly, Adel Bibi and Bernard Ghanem. Analytical Moment Regularizer for Gaussian Robust Networks (#281)
- Yulong Cao, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Mingyan Liu and Bo Li. Adversarial Objects Against LiDAR-Based Autonomous Driving Systems (#282)
Poster Location
Pacific Ballroom.
Poster Size
Posters follow the CVPR workshop format. The physical dimensions of the poster stands available this year are 8 feet wide by 4 feet high. Please review the CVPR18 poster template for more details on how to prepare your poster.
Organizing Committee
Bo Li
Li Erran Li
David Forsyth
Dawn Song
Ramin Zabih
Chaowei Xiao
Program Committee
Important Dates
- Workshop paper submission deadline: 5/20/2019
- Notification to authors: 6/07/2019
- Camera ready deadline: 6/12/2019
Call For Papers
Submission deadline: May 20, 2019 Anywhere on Earth (AoE)
Notification sent to authors: June 7, 2019 Anywhere on Earth (AoE)
Submission server: https://easychair.org/cfp/AdvMLCV2019
The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.
Submissions must be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or journal. The workshop will not have any official proceedings.
We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:
- Test-time (exploratory) attacks: e.g. adversarial examples for neural nets (a minimal sketch follows this list)
- Training-time (causative) attacks: e.g. data poisoning
- Physical attacks/defenses
- Differential privacy
- Privacy preserving generative models
- Game theoretic analysis on machine learning models
- Manipulation of crowd-sourcing systems
- Sybil detection
- Exploitable bugs in ML systems
- Formal verification of ML systems
- Model stealing
- Misuse of AI and deep learning
- Interpretable machine learning
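As a concrete point of reference for the first topic above, the following is a minimal sketch of crafting a test-time adversarial example with the fast gradient sign method (FGSM) in PyTorch. The model, input, label, and epsilon below are illustrative placeholders, not part of any workshop submission or required format.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft a test-time adversarial example with the fast gradient sign method."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step each pixel by epsilon in the direction that increases the loss,
        # then clip back to the valid image range.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Toy usage: an untrained linear classifier on a random 32x32 RGB "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)    # pixel values in [0, 1]
    y = torch.tensor([7])           # arbitrary label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon

Physical and black-box attacks of the kind listed above typically build on this basic formulation, replacing the single gradient step with transformations, queries, or hardware-level perturbations.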