News

Using Makeup to Block Surveillance

Altering one's facial features with a special type of makeup can keep a person from being recognized by artificial intelligence.
Figure. Pop art portrait of a female.

Anti-surveillance makeup, used to fool facial recognition systems by people who do not want to be identified, is bold and striking, not exactly the stuff of cloak and dagger. While experts’ opinions vary on how effective the makeup is at evading detection, they agree that its use is not yet widespread.

Anti-surveillance makeup works against the machine learning and deep learning models behind facial recognition by using highly contrasted markings to “break up the symmetry of a typical human face,” says John Magee, an associate professor of computer science at Clark University in Worcester, MA, who specializes in computer vision research. However, Magee adds, “If you go out [wearing] that makeup, you’re going to draw attention to yourself.”

The effectiveness of anti-surveillance makeup has come under debate as racial justice protesters look for ways to avoid being tracked, Magee notes.

Nitzan Guetta, a Ph.D. candidate at Ben-Gurion University in Israel, was among a group of researchers who spent the past two years exploring “how deep learning-based face recognition systems can be fooled using reasonable and unnoticeable artifacts in a real-world setup.” The researchers conducted an adversarial machine learning attack using natural makeup that prevents a participant from being identified by facial recognition models, she says.

The researchers “chose to focus on a makeup attack since at that time it was not explored, especially in the physical domain, and since we identified it as a potential and unnoticeable means that can be used for achieving this goal” of evading identification, Guetta explains.

When the researchers compared an adversarial/anti-surveillance makeup algorithm with normal makeup that didn’t have the guidance of the attack algorithm, “the results showed that the normal makeup did not succeed in fooling the facial recognition models,” she says.
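Guetta’s group describes its actual attack pipeline in the paper cited under Further Reading; as a rough illustration of the general idea behind such dodging attacks, the hedged PyTorch sketch below uses gradient descent to push a face’s embedding away from its own enrolled identity while confining changes to a makeup-like facial region. The stand-in embedder, the mask coordinates, and the optimization settings are all assumptions for illustration, not the researchers’ models or method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a real face-embedding network; the paper attacked
# far stronger models, which this sketch does not reproduce.
embedder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 128),
)
for p in embedder.parameters():
    p.requires_grad_(False)      # only the perturbation is optimized

face = torch.rand(1, 3, 112, 112)      # probe image (random placeholder)
enrolled = embedder(face).detach()     # embedding the system has on file

mask = torch.zeros_like(face)          # confine changes to "makeup" regions
mask[:, :, 40:90, 20:92] = 1.0         # illustrative cheek/nose area

delta = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(200):
    adv = (face + delta * mask).clamp(0, 1)   # keep a valid image
    loss = F.cosine_similarity(embedder(adv), enrolled).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                                # drive similarity down

# If the similarity falls below the system's match threshold, the probe
# face no longer matches its own enrolled identity: a "dodge."
```

The hard part the sketch omits, and the focus of the paper, is making the perturbation physically realizable as natural-looking makeup that survives cameras and lighting in the real world.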

Figure. In a test using a two-camera system described in the paper “Dodging Attack Using Carefully Crafted Natural Makeup,” an individual wearing no makeup was identified correctly in 47.57% of frames shot. With random makeup, that figure dropped to 33.73%; with an intentionally designed makeup scheme applied, the person was identified in just 1.22% of frames.

While Guetta says such adversarial makeup is easier to produce for males, she admits its overall use “may not be common in the real world, since producing such makeup that has a natural-looking appearance is not trivial.”

The method Guetta and her fellow researchers developed can be used to dodge identification by facial recognition models, as well as to make those models more robust, she says. “We are planning to make our code for producing the adversarial makeup available for the research community.”

Yet others remain skeptical. For people who wish to avoid being recognized, no facial recognition system is 100% accurate, and for surveillance purposes, facial recognition is “one piece of a very complicated puzzle,” says Erik Learned-Miller, a professor of computer science at the University of Massachusetts, Amherst.

While wearing anti-surveillance makeup makes it more difficult to identify someone with facial recognition, a surveillance system can still draw on other tools to track that person, he says. “You’re making one tool less effective by using makeup, but you’re by no means stopping the [identification] process,” Learned-Miller says.

He is not convinced anti-surveillance makeup alone is an effective camouflage tactic.

“Let’s say I’m an evil mastermind who built a giant surveillance system that uses credit card information, cellphone information, face recognition. The more accurate each piece is, the higher the percentage I will know where you are,” Learned-Miller says. “When you take one tool out, you reduce the effectiveness of the whole system.”

For example, while the makeup may camouflage your face, the system might still be able to identify you from the appearance of your hair. If someone is trying to tell the difference between you and three other people, your hair color may be enough to tell who you are, but not if someone is trying to pick you out of a group of 50 people, Learned-Miller says.

He also believes the use of anti-surveillance makeup is not that widespread, saying that “if you’re going to put makeup on your face it will draw attention to you in other ways” and make you stand out. “If you’re trying to be inconspicuous, then putting makeup on that makes you look like a mime artist won’t help.”

Learned-Miller also does not see anti-surveillance makeup “catching on” to conceal a person’s identity. Rather, he says, “I think it’s an interesting academic exercise to see if you can reduce the effectiveness of facial recognition algorithms.”

Magee is lukewarm on the effectiveness of anti-surveillance makeup as well, pointing out that facial detection and facial recognition pose separate challenges. Detection requires determining where someone’s face is, or whether an image actually depicts a face at all, he says. The other challenge is “recognition and discrimination between different faces and determining who someone is, so you may have two different goals in terms of hiding from these systems.”
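To make that distinction concrete, here is a minimal, hedged sketch of a two-stage pipeline; every function in it is an illustrative stub, not any real library’s API. Defeating the detection stage stops the pipeline before recognition ever runs; defeating the recognition stage leaves a face detected but unmatched.

```python
import random

def detect_faces(image):
    """Stage 1 (detection) stub: where is a face, if any?"""
    return [(10, 10, 100, 100)]        # placeholder bounding box

def embed(image, box):
    """Stage 2a (recognition) stub: map a face crop to features."""
    random.seed(sum(box))              # deterministic placeholder features
    return [random.random() for _ in range(4)]

def best_match(features, gallery):
    """Stage 2b: compare features against enrolled identities."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return max(((name, dot(features, f)) for name, f in gallery.items()),
               key=lambda pair: pair[1])

def identify(image, gallery, threshold=0.8):
    hits = []
    for box in detect_faces(image):    # defeat detection: nothing below runs
        name, score = best_match(embed(image, box), gallery)
        hits.append(name if score >= threshold else "unknown")
    return hits

gallery = {"alice": [0.9, 0.1, 0.4, 0.7]}   # hypothetical enrolled identity
print(identify("frame.jpg", gallery))       # ["alice"] or ["unknown"]
```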

Face detection looks for an overarching pattern of what makes a face, he says, while anti-surveillance makeup tries to present something very different from that pattern. Magee cites the example of CV (computer vision) Dazzle, created in 2010 by artist Adam Harvey as an academic project to demonstrate that bold makeup and hairstyles can keep facial features from being recognized by facial recognition systems.




Harvey says the name for the concept was inspired by a type of World War I naval camouflage called Dazzle, which used cubist-inspired designs to break the visual continuity of a battleship and conceal its orientation and size. Similarly, CV Dazzle uses “avant-garde hairstyling and makeup designs to break apart the continuity of a face.” Because facial recognition algorithms rely on key facial features like symmetry and tonal contours to identify spatial relationships, detection can be blocked by creating what Harvey calls an “anti-face.”

To use CV Dazzle today, Harvey says, the concept “would need to be redesigned to work with new algorithms.”

In describing the work on his website, Harvey wrote that “Newer forms of a CV Dazzle approach could target other algorithms, such as deep convolutional neural networks,” but that would require finding vulnerabilities in these algorithms. “Because computer vision is a probabilistic determination, finding the right look is about finding how to appear one step below the threshold of detection.”
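A toy illustration of that idea, under the assumption of a detector that emits a single face-confidence score and a fixed cutoff (both numbers are invented here, not taken from any real detector):

```python
# A detector outputs a face probability; the pipeline acts only above
# a cutoff, so an "anti-face" aims to score just below it.
DETECTION_THRESHOLD = 0.5   # illustrative cutoff, an assumption

def proceeds_to_recognition(face_probability: float) -> bool:
    return face_probability >= DETECTION_THRESHOLD

print(proceeds_to_recognition(0.51))  # True: face found, recognition runs
print(proceeds_to_recognition(0.49))  # False: one step below the threshold
```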

Echoing Magee, Harvey says face detection is the first step in any automated facial recognition system, so CV Dazzle’s hairstyling and makeup aim to block that step. If an algorithm does not detect a face, the subsequent recognition algorithms are effectively blocked, he says. Harvey also points to a tool (https://dl.acm.org/doi/10.1145/2502081.2502121) that can suggest methods of facial camouflage, such as makeup.

Harvey adds that he is no longer actively maintaining or developing CV Dazzle. Instead, he has developed two new projects, because artificial intelligence (AI) “has changed the calculus of what’s possible with anti-surveillance.”

One project Harvey now is involved with is Exposing.ai, an art and research effort that aims to undercut the information supply chains powering biometric recognition and surveillance technologies. The other is VFRAME.io, a computer vision project geared toward human rights research, for which Harvey has developed custom surveillance tools.

Magee says he would be surprised if anyone is developing anti-surveillance makeup right now. However, he has seen t-shirts designed to thwart detection, using patterns that break up what facial recognition systems have been trained to look for.

All of these systems are trained by giving them data with examples of what humans are and what they are not, Magee says. “If you can break their statistical model … that’s when an anti-surveillance approach works.”

Take a system that has been trained to detect pedestrians versus cars and motorcycles, for example. “You could be wearing a t-shirt that makes your body look more like a car or motorcycle,” he says. This approach “feels more modern [than] distinctive makeup that makes someone stand out in a crowd.”

Yet computer vision evolves very rapidly, and a system trained by showing it images of what a face is and is not will not be fooled if the training is constantly updated. “As soon as someone says, ‘I can break your system by putting on this makeup,’ you can feed those examples into a computer vision system and learn how to recognize them,” Magee says, “so to some extent … it can see through the camouflage.”
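A minimal sketch of that update loop, with stub names standing in for a real dataset and training routine:

```python
# Once a camouflage style is observed, label those images correctly and
# retrain, so the model "sees through" it. All names here are illustrative.

def retrain(dataset):
    """Stub for a training routine; a real system would fit a detector."""
    return {"trained_on": len(dataset)}    # placeholder "model"

training_set = [("plain_face.jpg", "face"), ("street_scene.jpg", "no_face")]
observed_camouflage = ["dazzle_makeup_01.jpg", "dazzle_makeup_02.jpg"]

# Fold the adversarial examples back in with the correct label...
training_set += [(img, "face") for img in observed_camouflage]
# ...and update the detector, eroding that camouflage's advantage.
model = retrain(training_set)
print(model)
```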

It comes down to what algorithms and training examples are being used, he says.

That’s what Learned-Miller sees as the overarching issue, too. Reliably fooling facial recognition with something like anti-surveillance makeup is never going to happen because it is too difficult, he says.

Rather, says Learned-Miller, it is more interesting—and more effective—to look at the kinds of techniques that would cause a computer to get confused.

Like Magee, Learned-Miller says “if everyone started wearing makeup, similar to everyone wearing masks during COVID, people are going to develop algorithms that will be good at getting around that problem. So I don’t see wearing makeup to confuse a computer algorithm as a long-term solution” to the problem of surveillance and facial recognition being used in inappropriate ways, such as someone changing their appearance to rob a bank.

Learned-Miller produced a benchmark for facial recognition called Labeled Faces in the Wild (http://vis-www.cs.umass.edu/lfw/), which tests and scores computer algorithms on how well they do in recognizing people’s faces.

His work has now shifted to studying fairness/bias and privacy in face recognition. Learned-Miller and others have proposed a new federal office to regulate facial recognition technologies, similar to the structure of the U.S. Food and Drug Administration.

He also serves on a Massachusetts state commission studying how facial recognition systems are used by law enforcement, which will make recommendations about legislation to address the use of the technology.

While anti-surveillance makeup can fool facial recognition systems sometimes, Learned-Miller sees bigger things at play. This type of makeup “is one little cog in a complex set of issues, and it doesn’t strike me as a very practical solution,” he says.

Learned-Miller expects there will be “an arms race between those who don’t want to be recognized and those who want to recognize them, and it’s hard to stop that arms race,” he says. “It’s easier to control what things are allowable by law.”

Further Reading

Guetta, N., Shabtai, A., Singh, I., Momiyama, S., and Elovici, Y.
Dodging Attack Using Carefully Crafted Natural Makeup, Ben-Gurion University of the Negev, September 14, 2021, https://bit.ly/3FUMhn5

Valenti, L.
Can Makeup Be an Anti-Surveillance Tool? Vogue, June 12, 2020, https://www.vogue.com/article/anti-surveillance-makeup-cv-dazzle-protest

Sojit Pejcha, C.
Anti-surveillance makeup could be the future of beauty, Document Journal, January 30, 2020, https://bit.ly/3pSdTmY

RoyChowdhury, A., Yu, X., Sohn, K., Learned-Miller, E., and Chandraker, M.
Improving Face Recognition by Clustering Unlabeled Faces in the Wild, University of Massachusetts, Amherst. July 15, 2020, https://arxiv.org/pdf/2007.06995.pdf

CV Dazzle: https://ahprojects.com/cvdazzle/

Labeled Faces in the Wild Home: http://vis-www.cs.umass.edu/lfw/

MakeUp Tutorial: How To Hide From Cameras, Jillian Mayer, May 30, 2013, https://www.youtube.com/watch?v=kGGnnp43uNM

The Next HOPE: CV Dazzle: Face Deception, Channel2600, December 31, 2013, https://binged.it/3FXclh8
