It can make you invisible to surveillance
The researchers created a special pattern that, when overlaid on an object, prevents state-of-the-art detectors, such as those used in security monitoring, from correctly identifying or locating that object, effectively making it invisible.
This article examines the art and science of mounting adversarial attacks on object detectors. Most work on real-world adversarial attacks targets classifiers, which assign a single label to an entire image, rather than detectors, which locate objects within the image. A detector works by considering thousands of “priors” (potential bounding boxes) with different positions, sizes, and aspect ratios across the image. To fool the detector, an adversarial example must fool every prior in the image, which is much harder than fooling the single output of a classifier.
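To get a feel for the scale of this problem, here is a rough count of how many priors a single-stage detector evaluates per image. The input size, strides, and anchors-per-cell below are illustrative assumptions in the spirit of YOLO-style detectors, not figures taken from the paper:

```python
# Rough prior count for a YOLO-style detector on a 416x416 input,
# assuming three feature maps (strides 8/16/32) with 3 anchor shapes
# per grid cell. All numbers are illustrative assumptions.
input_size = 416
strides = [8, 16, 32]
anchors_per_cell = 3

total_priors = 0
for s in strides:
    cells = (input_size // s) ** 2   # grid cells at this scale
    total_priors += cells * anchors_per_cell

print(total_priors)  # → 10647
```

Every one of those ~10,000 boxes carries its own objectness score, and a successful "cloak" has to suppress all of them at once, whereas a classifier attack only needs to flip one output.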
In this work, we conduct a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using a standard detection dataset, we train patterns that suppress the objectness scores produced by a range of commonly used detectors and detector ensembles. Our ultimate goal is a wearable “invisibility cloak” that prevents a detector from registering the wearer’s presence.
Method
We load images from the COCO detection dataset and pass them through the detector. Whenever a person is detected, the pattern is rendered onto that person with random distortions of viewing angle, brightness, and contrast. Gradient descent is then used to find a pattern that minimizes the “objectness score” (the confidence that an object exists) of every prior.
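The optimization loop can be sketched with a toy stand-in for the detector. Everything here, the linear “priors,” the learning rate, and the step count, is an illustrative assumption, not the paper’s implementation, which backpropagates through a real detector and the random rendering distortions described above:

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 50, 64                    # toy priors, flattened patch size
W = rng.normal(size=(K, D))      # stand-in for fixed detector weights
b = rng.normal(size=K)

def objectness(patch):
    """Per-prior confidence that an object is present (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(W @ patch + b)))

patch = 0.1 * rng.normal(size=D)
init_score = objectness(patch).mean()

lr = 0.01
for _ in range(2000):
    s = objectness(patch)
    # gradient of sum_k sigmoid_k w.r.t. the patch: sum_k s_k(1 - s_k) W_k
    grad = (s * (1.0 - s)) @ W
    patch -= lr * grad           # descend: suppress every prior's score

final_score = objectness(patch).mean()
```

In the real attack, the same idea is applied through a full detector, with the patch rendered onto detected people under random angle, brightness, and contrast transformations before each gradient step, so that the learned pattern survives real-world viewing conditions.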
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors and detector ensembles. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches in both white-box and black-box settings, and quantify the transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical-world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.
If you want to learn more, you can follow the links below the video.
Thank you for watching. If you enjoyed this video, please like and subscribe. Thanks!
Project page: https://www.cs.umd.edu/~tomg/projects/invisible/
Paper: https://arxiv.org/abs/1910.14667
Video: