Although computer vision models have achieved strong performance on various recognition tasks in recent years, they are known to be vulnerable to adversarial examples. The existence of adversarial examples reveals that current computer vision models behave differently from the human visual system, and at the same time provides opportunities for understanding and improving these models.
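As a concrete illustration of this vulnerability, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy linear classifier. The model, its weights, and the inputs are all hypothetical, chosen only to show how a small, gradient-guided perturbation can flip a prediction; real attacks target deep networks, but the mechanism is the same.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0.
# The weights are illustrative, not a trained model.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, y, eps):
    """FGSM step for a linear model with logistic loss.
    The input gradient of the loss is a positive multiple of
    -(2y - 1) * w, so only its sign matters for the attack."""
    grad = -(2 * y - 1) * w          # d(loss)/dx, up to a positive factor
    return x + eps * np.sign(grad)   # small step that increases the loss

x = np.array([1.0, 0.5])             # clean input with true label y = 1
y = 1
x_adv = fgsm(x, y, eps=1.2)

print(predict(x))      # clean input is classified as 1
print(predict(x_adv))  # perturbed input is misclassified as 0
```

Even though the perturbation is bounded in each coordinate by `eps`, it is aligned with the loss gradient, which is enough to push the input across the decision boundary.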
In this workshop, we will focus on recent research and future directions in adversarial machine learning for computer vision. We aim to bring together experts from the computer vision, machine learning, and security communities to highlight recent progress in this area and to discuss the benefits of integrating advances in adversarial machine learning into general computer vision tasks. Specifically, we seek to study adversarial machine learning not only as a means of enhancing model robustness against adversarial attacks, but also as a tool for diagnosing and explaining the limitations of current computer vision models and for suggesting strategies for improvement. We hope this workshop can shed light on bridging the gap between the human visual system and computer vision systems, and foster collaboration across the computer vision, machine learning, and security communities.
For more details, please visit Adversarial Machine Learning in Computer Vision.