Adversarial Examples on Object Recognition: A Comprehensive Survey

Bibliographic Details
Title: Adversarial Examples on Object Recognition: A Comprehensive Survey
Authors: Serban, Alex; Poll, Erik; Visser, Joost
Publication Year: 2020
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
Abstract: Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: small perturbations of inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network's sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature conjectures on why they exist and how the phenomenon can be mitigated. In this article we discuss the impact of adversarial examples on the security, safety, and robustness of neural networks. We start by introducing the hypotheses behind their existence, the methods used to construct them or to protect against them, and the capacity to transfer adversarial examples between different machine learning models. Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research.
Comment: Published in ACM CSUR. arXiv admin note: text overlap with arXiv:1810.01185
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2008.04094
Accession Number: edsarx.2008.04094
Database: arXiv
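
Note: The abstract above refers to "methods used to construct" adversarial examples. For readers new to the area, the following is a minimal sketch of one of the best-known construction methods in this literature, the fast gradient sign method (FGSM) of Goodfellow et al.; it is an illustration, not code taken from the survey. It assumes a PyTorch classifier `model` that returns logits, an input batch `image` with pixel values in [0, 1], integer labels `label`, and a perturbation budget `epsilon` (all names are illustrative).

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=0.03):
        """Craft an adversarial example with the fast gradient sign method."""
        # Track gradients on a detached copy of the input.
        image = image.clone().detach().requires_grad_(True)
        # Loss of the model's prediction with respect to the true label.
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the sign of the input gradient, scaled by epsilon,
        # so each pixel moves by at most epsilon.
        perturbed = image + epsilon * image.grad.sign()
        # Keep pixel values in the valid [0, 1] range.
        return perturbed.clamp(0.0, 1.0).detach()

A larger epsilon makes the attack stronger but the perturbation more visible; iterative variants apply this step repeatedly with a projection back into the epsilon-ball around the original input.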