On the Limitation of MagNet Defense Against L1-Based Adversarial Examples

Pei Hsuan Lu, Pin Yu Chen, Kang Cheng Chen, Chia Mu Yu

Research output: Chapter in Book/Report/Conference proceeding – Conference contribution

3 Scopus citations

Abstract

In recent years, defending against adversarial perturbations of natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field at the intersection of deep learning and security. In particular, MagNet, consisting of an adversary detector and a data reformer, is by far one of the strongest defenses in the black-box oblivious attack setting, where the attacker aims to craft transferable adversarial examples from an undefended DNN model to bypass an unknown defense module deployed on the same DNN model. Under this setting, MagNet can successfully defend against a variety of attacks on DNNs, including the high-confidence adversarial examples generated by Carlini and Wagner's attack based on the L2 distortion metric. However, in this paper, under the same attack setting we show that adversarial examples crafted based on the L1 distortion metric can easily bypass MagNet and mislead the target DNN image classifiers on MNIST and CIFAR-10. We also explain why the considered approach yields adversarial examples with superior attack performance, and we conduct extensive experiments on variants of MagNet to verify its lack of robustness to L1-distortion-based attacks. Notably, our results substantially weaken the assumption of effective threat models on MagNet that require knowing the deployed defense technique when attacking DNNs (i.e., the gray-box attack setting).
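For intuition on the distortion metrics discussed in the abstract, the following minimal sketch (illustrative only; the pixel values and perturbation here are hypothetical, not the paper's attack code) contrasts the L1 and L2 distortion of a sparse perturbation, the kind favored by L1-based attacks:

```python
import math

# Illustrative only: flattened grayscale "images" as plain lists of pixel values.
x = [0.0] * 784                       # clean MNIST-sized example (28 * 28 pixels)
delta = [0.5] * 10 + [0.0] * 774      # sparse perturbation: 10 pixels changed by 0.5
x_adv = [xi + di for xi, di in zip(x, delta)]

# L1 distortion: sum of absolute pixel changes (rewards changing few pixels).
l1 = sum(abs(a - b) for a, b in zip(x_adv, x))

# L2 distortion: Euclidean norm of the change (penalizes total perturbation energy).
l2 = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_adv, x)))

print(l1)  # 5.0
print(l2)  # ~1.58
```

An attack that minimizes L1 distortion tends to concentrate large changes on a few pixels, which is a qualitatively different perturbation pattern from the L2-minimizing examples MagNet was evaluated against.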

Original language: English
Title of host publication: Proceedings - 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 200-214
Number of pages: 15
ISBN (Electronic): 9781538655955
DOIs: 10.1109/DSN-W.2018.00065
State: Published - 19 Jul 2018
Event: 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018 - Luxembourg City, Luxembourg
Duration: 25 Jun 2018 - 28 Jun 2018

Publication series

Name: Proceedings - 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018

Conference

Conference: 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018
Country: Luxembourg
City: Luxembourg City
Period: 25/06/18 - 28/06/18

Keywords

  • adversarial example
  • neural network


  • Cite this

    Lu, P. H., Chen, P. Y., Chen, K. C., & Yu, C. M. (2018). On the Limitation of MagNet Defense Against L1-Based Adversarial Examples. In Proceedings - 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018 (pp. 200-214). [8416250] (Proceedings - 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops, DSN-W 2018). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/DSN-W.2018.00065