Adversarial Machine Learning in Real-World Computer Vision Systems

Date: June 16, 2019

Location: Long Beach, CA, USA (co-located with CVPR 2019)

Abstract—As computer vision models are being increasingly deployed in the real world, including applications that require safety considerations such as self-driving cars, it is imperative that these models are robust and secure even when subject to adversarial inputs.

This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to come to a consensus on a rigorous framework for formulating adversarial machine learning problems in computer vision, characterize the properties that ensure the security of perceptual models, and evaluate the consequences under various adversarial models. Finally, we hope to chart out important directions for future work and cross-community collaborations among the computer vision, machine learning, security, and multimedia communities.
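To make the notion of adversarial inputs concrete, the sketch below shows the fast gradient sign method (FGSM), one standard way such inputs are generated. This is an illustrative example only, not drawn from any workshop paper: the toy logistic-regression model and all names are our own, and the idea is simply to perturb an input in the direction that increases the model's loss.

```python
import numpy as np

# Toy model: logistic regression with weights w, bias b.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy loss for label y in {0, 1}.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, b, x, y, eps):
    # For logistic regression, the gradient of the loss
    # with respect to the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Move each feature by eps in the sign of the gradient,
    # i.e. the direction that increases the loss.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.1)
print("clean loss:", loss(w, b, x, y))
print("adversarial loss:", loss(w, b, x_adv, y))
```

For this linear model the perturbation provably increases the loss; against deep vision models the same one-step attack, applied to image pixels within a small epsilon-ball, can flip predictions while remaining imperceptible to humans.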


The following is a tentative schedule and is subject to change prior to the workshop.

8:00am Opening Remarks
Session 1: Interpretable Machine Learning Models
9:00am Invited Talk #1:
9:30am Contributed Talk #1:
10:00am Poster Spotlights #1:
10:00am Coffee Break
Session 2: Adversarial Examples in the Physical World
10:30am Invited Talk #2:
11:00am Contributed Talk #2:
11:15am Invited Talk #3:
11:45am Poster Spotlights #2:
12:00pm Lunch
Session 3: Improving Model Robustness Against Adversarial Examples
1:15pm Invited Talk #4:
1:45pm Contributed Talk #3:
2:00pm Poster Session followed by break
Session 4: Adversarial Machine Learning in Autonomous Driving
2:45pm Invited Talk #5:
3:15pm Contributed Talk #4:
3:30pm Invited Talk #6:
4:00pm Contributed Talk #5:

Important Dates

Submission deadline: May 10, 2019, Anywhere on Earth (AoE)

Notification to authors: June 1, 2019, Anywhere on Earth (AoE)

Call For Papers

Submission server:

The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to:

Organizing Committee

Bo Li

Li Erran Li

Gerald Friedland

David Forsyth

Dawn Song

Ramin Zabih

Chaowei Xiao

Program Committee

  • Bhavya Kailkhura (Lawrence Livermore National Lab)
  • Catherine Olsson (Google Brain)
  • Chaowei Xiao (University of Michigan)
  • David Evans (University of Virginia)
  • Dimitris Tsipras (Massachusetts Institute of Technology)
  • Earlence Fernandes (University of Washington)
  • Eric Wong (Carnegie Mellon University)
  • Fartash Faghri (University of Toronto)
  • Florian Tramer (Stanford University)
  • Hadi Abdullah (University of Florida)
  • Hao Su (UCSD)
  • Jonathan Uesato (DeepMind)
  • Karl Ni (In-Q-Tel)
  • Kassem Fawaz (University of Wisconsin-Madison)
  • Kathrin Grosse (CISPA)
  • Krishna Gummadi (MPI-SWS)
  • Matthew Wicker (University of Georgia)
  • Nathan Mundhenk (Lawrence Livermore National Lab)
  • Nicholas Carlini (Google Brain)
  • Nicolas Papernot (Google Brain and University of Toronto)
  • Octavian Suciu (University of Maryland)
  • Pin-Yu Chen (IBM)
  • Pushmeet Kohli (DeepMind)
  • Qian Chen (Tencent)
  • Qi Alfred Chen (UC Irvine)
  • Shreya Shankar (Stanford University)
  • Suman Jana (Columbia University)
  • Varun Chandrasekaran (University of Wisconsin-Madison)
  • Xiaowei Huang (University of Liverpool)
  • Yanjun Qi (University of Virginia)
  • Yigitcan Kaya (University of Maryland)
  • Yizheng Chen (Georgia Tech)