Adversarial Machine Learning in Real-World Computer Vision Systems

Date: June 16, 2019

Location: Long Beach, CA, USA (co-located with CVPR 2019)

Abstract—As computer vision models are increasingly deployed in the real world, including in safety-critical applications such as self-driving cars, it is imperative that these models be robust and secure even when subjected to adversarial inputs.

This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to reach a consensus on a rigorous framework for formulating adversarial machine learning problems in computer vision, characterize the properties that ensure the security of perceptual models, and evaluate the consequences under various adversarial models. Finally, we hope to chart important directions for future work and cross-community collaboration spanning the computer vision, machine learning, security, and multimedia communities.

Schedule

The following is a tentative schedule and is subject to change prior to the workshop.

8:40am Opening Remarks
Session 1: Robust Perception, Imitation, and Control
9:00am Invited Talk #1: Yisong Yue. Two Vignettes in Robust Detection and Adversarial Analysis for Control
9:30am Invited Talk #2: Hao Su. Towards Attack-Agnostic Defense for 2D and 3D Recognition
10:00am Contributed Talk #1: NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
10:15am Coffee Break
Session 2: Improve Model Robustness Against Adversarial Attacks
10:30am Invited Talk #3: Alexander Schwing. Robust GAN Training to Capture Priors When Learning to Anticipate
11:00am Contributed Talk #2: Learning Transferable Adversarial Examples via Ghost Networks
11:15am Poster Session #1 followed by break
12:00pm Lunch
Session 3: Vulnerabilities and Robustness of Machine Learning Models
1:15pm Invited Talk #4: Song Han. Defensive Quantization: When Efficiency Meets Robustness
1:45pm Invited Talk #5: Sergey Levine. Robust Perception, Imitation, and Reinforcement Learning for Embodied Learning Machines
2:15pm Contributed Talk #3: Big but Imperceptible Adversarial Perturbations via Semantic Manipulation
2:30pm Coffee Break
Session 4: Adversarial Machine Learning in Autonomous Driving
2:45pm Invited Talk #6: Trevor Darrell. Explainable AI for VQA and Driving
3:15pm Invited Talk #7: Bolei Zhou. Image Manipulation from Adversarial Samples to GANs
3:45pm Contributed Talk #4: Attacking Multiple Object Tracking using Adversarial Examples
4:00pm Contributed Talk #5: Adversarial Objects Against LiDAR-Based Autonomous Driving Systems
4:15pm Poster Session #2

Poster Sessions

Poster Session #1 (11:15am-12:00pm)

Poster Session #2 (4:15pm-5:00pm)

Poster Location

Pacific Ballroom.

Poster Size

Posters follow the CVPR workshop format. The poster stands available this year are 8 feet wide by 4 feet high. Please review the CVPR18 poster template for more details on how to prepare your poster.

Organizing Committee

Bo Li

Li Erran Li

David Forsyth

Dawn Song

Ramin Zabih

Chaowei Xiao

Program Committee

  • Hadi Abdullah (UF-FICS)
  • Yunhan Jia (Baidu X-Lab)
  • Yulong Cao (University of Michigan)
  • Octavian Suciu (University of Maryland)
  • Qi Alfred Chen (University of California, Irvine)
  • Cihang Xie (Johns Hopkins University)
  • Yigitcan Kaya (University of Maryland)
  • Edward Zhong (Baidu USA)
  • Matthew Wicker (University of Georgia)
  • Linyi Li (University of Illinois at Urbana-Champaign)
  • Yizheng Chen (Columbia University)
  • Zhuolin Yang (Shanghai Jiao Tong University)
  • Sixie Yu (Washington University in St. Louis)
  • Min Jin Chong (University of Illinois at Urbana-Champaign)
  • Eric Wong (Carnegie Mellon University)
  • Yuxin Wu (FAIR)
  • Warren He (University of California, Berkeley)
  • Xinchen Yan (University of Michigan, Ann Arbor)
  • Mantas Mazeika (University of Chicago)
  • Kimin Lee (Korea Advanced Institute of Science and Technology)
  • Shreya Shankar (Stanford University)
  • Xinyun Chen (University of California, Berkeley)
  • Kaizhao Liang (University of Illinois at Urbana-Champaign)
  • Fartash Faghri (University of Toronto)
  • Anand Bhattad (University of Illinois at Urbana-Champaign)
  • Yunseok Jang (University of Michigan)
  • Xiaowei Huang (University of Liverpool)
  • Karl Ni (Google LLC)
  • Kathrin Grosse (CISPA, Saarland University)
  • Chao Yan (Vanderbilt University)
  • Dawei Yang (University of Michigan)
  • Pin-Yu Chen (IBM)
Important Dates

Submission deadline: May 20, 2019, Anywhere on Earth (AoE)

Notification sent to authors: June 7, 2019, Anywhere on Earth (AoE)

Submission server: https://easychair.org/cfp/AdvMLCV2019

Call For Papers

The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.

Submissions need to be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. The workshop will not have any official proceedings.

We invite submissions on any aspect of machine learning that relates to computer security (and vice versa). This includes, but is not limited to: