
The State of Fakery

How digital media could be authenticated, from computational, legal, and ethical points of view.

Back in 1999, Hany Farid was finishing his postdoctoral work at the Massachusetts Institute of Technology (MIT) when, in a library, he stumbled on a book called The Federal Rules of Evidence. The book caught his eye, and Farid opened it to a random page, where he found a section entitled “Introducing Photos into a Court of Law as Evidence.” Since he was interested in photography, Farid wondered what those rules were.

While Farid was not surprised to learn that a 35mm negative is considered admissible as evidence, he was surprised when he read that then-new digital media would be treated the same way. “To put a 35mm negative and a digital file on equal footing with respect to reliability of evidence seemed problematic to me,” says Farid, now a computer science professor at Dartmouth College and a leading expert on digital forensics. “Anyone could see where the trends were going.”


Images, voice recordings, video, and other forms of media can be changed through the use of software, so we need better ways to spot deception.


That led Farid on a two-decade-long journey to consider how digital media could be authenticated from the computational, legal, and ethical points of view. He and others have their work cut out for them; there are dozens of ways a digital image can be manipulated, from something as simple as cropping or lightening it to something far more nefarious.

As fake news dominates headlines and the use of artificial intelligence (AI) to alter images, video, or photographs is rampant, media outlets, political campaigns, ecommerce sites, and even legal proceedings are being called into question for the work they generate. This has led to various efforts in government, academia, and technological realms to help identify such fakery.

“More and more, we’re living in a digital world where that underlying digital media can be manipulated and altered, and the ability to authenticate is incredibly important,” says Farid. Videos can go viral in a matter of minutes; given the pace of technological advance and the ease of deceiving someone online, how can we trust what we see?

Figure. A real dog (left), and an image of a dog created by a deep convolutional generative adversarial network (GAN) algorithm.

It is a troubling question, and no one, it seems, is immune from being targeted. Virgin Group founder Sir Richard Branson revealed he has been the target of scams using his image to impersonate him. In a blog post in January 2017, Branson noted that “the platforms where the fake stories are spreading need to take responsibility … and do more to prevent this dangerous practice.”

People need to be skeptical of what they see and hear, says David Schubmehl, research director, Cognitive/AI Systems and Content Analytics at market research firm IDC.

“People have to get used to the idea that images, voice recordings, video, and any other forms of media that can be represented as digital data can be manipulated and changed in one or more ways through the use of software,” Schubmehl says.

Because machine learning is becoming so prevalent, experts say we need better ways to spot the deception it can enable. For example, Schubmehl says, “Today, researchers are experimenting with generative adversarial networks (GANs) that can be used to combine two different types of images or video together to create a merged third type of video.”

The idea behind GANs is to have two neural networks, one that acts as a “discriminator” and the other a “generator,” compete against each other to build the best algorithm for solving a problem. The generator uses the feedback it receives from the discriminator to learn to produce convincing images that cannot be distinguished from real ones; the discriminator, in turn, gets better at telling real images from fakes.
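
A minimal sketch of that two-player training loop may make the dynamic concrete. The example below uses PyTorch; the network sizes, image dimensions, and learning rates are illustrative assumptions, not details of any system discussed in this article.

```python
# Minimal GAN training loop (illustrative only): a generator learns to map
# random noise to images, while a discriminator learns to separate real
# images from generated ones. All sizes here are assumptions.
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 28 * 28  # assumed: 28x28 grayscale images

# Generator: noise vector -> synthetic image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: image -> probability the image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real from generated images.
    fakes = generator(torch.randn(batch, NOISE_DIM)).detach()  # generator frozen
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator; the gradient of the
    #    discriminator's judgment is the "feedback" described above.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks alternate, each round of discriminator improvement raises the bar the generator must clear, which is why GAN-generated images have become so difficult to distinguish from real ones.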

Of course, image altering has its legitimate benefits. It has allowed the movie industry to produce spectacular action movies, since the vast majority of that action is computer-generated content, Farid points out. On the consumer side, image altering lets people create aesthetically pleasing photos, so everyone looks good in the same photo.

Schubmehl agrees. “Hollywood has been creating fake worlds for people for decades and now regular people can do it as well,” he says. “There’s a tremendous use for tools like Photoshop to create exactly the right type of image that someone wants for whatever purposes.”

Whether manipulated images should be identified as such is the subject of much debate. While it would seem to be a no-brainer when it comes to their use in legal cases and photojournalism, there are mixed opinions about the use of manipulated images in industries such as fashion, entertainment, and advertising. France recently passed a law stipulating that any image showing a model whose appearance has been altered must feature a clear and prominent disclaimer label to indicate this is the case, notes Sophie Nightingale, a postdoctoral teaching associate at Royal Holloway University of London. Those who do not comply with the law are subject to a fine of 30% of the advertising cost.


France enacted legislation requiring images showing models whose appearances have been altered to be clearly, prominently labeled to that effect.


“Perhaps not too surprisingly, advertisers and publishers continue to resist such legislation and criticize the limitations it places on free expression and artistic freedom,” says Nightingale, who completed image manipulation studies as part of her Ph.D. at the University of Warwick in Coventry, U.K. She adds that “many photographers believe that the use of image manipulation techniques is a positive thing that allows creative freedom; in fact, some suggest the ability to manipulate images makes a photographer more akin to a painter who takes something that is real and puts their own artistic spin on it.”

One tricky thing is that “manipulated” is not an easy word to define, observes Farid, and there are a lot of gray areas. He advocates that publishing and media outlets, as well as courts of law and scientific journals, adhere to a policy of “you don’t have to tag the images, you simply have to show me the original.” This, he says, “keeps people honest.” That approach “allows us, the consumers, to make the determination, and bypasses the complexity of defining what’s appropriate and what isn’t,” Farid says.

Right now, when sophisticated manipulations are used, we cannot know for sure whether a photo has been altered, says Nightingale. She adds, however, that there are signs of image manipulation that can be identified.

“Computer scientists working in digital forensics and image analysis have developed a suite of programs that detect inconsistencies in the image, perhaps in the lighting,” she says. Her work includes conducting research to see whether people can make use of these types of inconsistencies to help identify forgeries.

Other, more general tips Nightingale suggests for spotting fake photos include using reverse image searches to find the image source; looking for repeating patterns in the image, since repetition might be a sign that something has been cloned; and checking the metadata, which provides details such as the date and time the photo was taken, the camera settings used, and location coordinates.
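
The metadata check, at least, is easy to automate. The short sketch below uses the Pillow imaging library (my choice; the article does not name a tool) to dump whatever EXIF tags a file still carries. Keep in mind that editors and social platforms often strip or rewrite EXIF data, so missing metadata proves nothing on its own.

```python
# Dump the EXIF metadata of an image file using Pillow.
from PIL import Image, ExifTags

def print_metadata(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found; it may have been stripped.")
        return
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)  # numeric ID -> readable name
        print(f"{name}: {value}")  # e.g., DateTime, Make, Model

print_metadata("photo.jpg")  # hypothetical file name
```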

Figure. Architecture of a generative adversarial network.

While digital forensic techniques are a promising way to check the authenticity of photos, for now “using these techniques requires an expert and can be time-consuming,” Nightingale says. “What’s more, they don’t 100% guarantee that a photo is real or fake. That said, digital forensic techniques and our work, which is trying to improve people’s ability to spot fake images, does at least make it more difficult for forgers to fool people.”

Farid has developed several techniques for determining whether an image has been manipulated. One method looks at whether a JPEG image has been compressed more than once. Another detects image cloning, a common step in trying to remove something from an image, he says. In addition, Schubmehl cites the development of machine learning algorithms by researchers at New York University to spot counterfeit items.
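
Farid’s published detectors are considerably more sophisticated, but a simple cousin of the recompression idea, known as error level analysis, can be sketched in a few lines: resave the image at a known JPEG quality and see how much each region changes. A region whose compression history differs from the rest of the picture tends to stand out in the difference image. The file names and quality setting below are assumptions.

```python
# Error level analysis (a simplified illustration, not Farid's method):
# recompress the image once and inspect the per-pixel differences.
import io
from PIL import Image, ImageChops

def error_levels(path: str, quality: int = 95) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)  # recompress once
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)  # error levels per pixel

diff = error_levels("suspect.jpg")  # hypothetical file name
diff.save("error_levels.png")      # unusually bright regions merit scrutiny
```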

The mission of a five-year U.S. Defense Advanced Research Projects Agency (DARPA) program called MediFor (Media Forensics) is to use digital forensics techniques to build an automated system that can accurately analyze hundreds of thousands of images a day, says Farid, who is participating in the program. “We’re now in the early days of figuring out how to scale [the system] so we can do things quickly and accurately to stop the spread of viral content that is fake or has been manipulated,” he says. “The stakes can be very, very high, and that’s something we have to worry a great deal about.”

That is because a growing number of AI tools are increasing the ability for fakery to flourish, regardless of how they are being used. In 2016, Adobe announced VoCo (voice conversion), essentially a “Photoshop of speech” tool that lets a user edit recorded speech to replicate and alter voices.

Face2Face is an AI-powered tool that can do real-time video reenactment. The technology lets a user “animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photorealistic fashion,” according to its creators at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University. When someone moves their mouth and makes facial expressions, those movements and expressions will be tracked and then translated onto someone else’s face, making it appear that the target person is making those exact movements.

On the flip side is software that helps users take preventative measures against being duped. One example is an AI tool called Scarlett, recently introduced by adult dating site SaucyDates with the goal of reducing fraud and scams in the dating industry. Scarlett acts as a virtual assistant; as people hold live conversations, it scores each user, and when a score crosses a threshold, the conversation is flagged and read by a moderator. To protect the privacy of the conversation, the moderator can only read the suspected fraudster’s messages, explains David Minns, founder and CEO of software developer DM Cubed, which built the SaucyDates tool. He adds that the AI tool also warns the potential victim of fraudulent content.
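
Minns does not describe Scarlett’s internals, but the score-and-flag pattern he outlines is straightforward to illustrate. Everything in the sketch below is hypothetical: the keyword scorer stands in for whatever model Scarlett actually uses, and the threshold and message format are invented for the example.

```python
# Hypothetical score-and-flag moderation sketch; not SaucyDates' actual code.
FLAG_THRESHOLD = 0.8  # assumed value

def risk_score(message: str) -> float:
    """Stand-in scorer: a real system would use a trained classifier."""
    suspicious = ("wire transfer", "gift card", "western union")
    return min(1.0, sum(kw in message.lower() for kw in suspicious) / 2)

def messages_for_moderator(conversation: list[dict]) -> list[str]:
    """Track each user's worst score; return only the flagged users' messages,
    mirroring the privacy rule that moderators see just the suspect's side."""
    scores: dict[str, float] = {}
    for msg in conversation:
        scores[msg["user"]] = max(scores.get(msg["user"], 0.0),
                                  risk_score(msg["text"]))
    flagged = {user for user, s in scores.items() if s >= FLAG_THRESHOLD}
    return [msg["text"] for msg in conversation if msg["user"] in flagged]
```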

Farid says we should absolutely be alarmed by the growth of software that enables digital media to be manipulated into fakes for nefarious purposes.

“There’s no question that, from the field of computer vision to computational photography to computer graphics to software that is commercially available, we continue to be able to manipulate digital content in ways that were unimaginable a few years ago,” he says. “And that trend is not going away.”

*  Further Reading

Herder, C., Yu, M-D., Koushanfar, F., and Devadas, S.
Physical Unclonable Functions and Applications: A Tutorial. Proceedings of the IEEE, Vol. 102, No. 8, Aug. 2014.

Pappu, R.
Physical One-Way Functions. Ph.D. thesis, Massachusetts Institute of Technology, 2001. http://cba.mit.edu/docs/theses/01.03.pappuphd.powf.pdf

Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., and Nießner, M.
Face2Face: Real-Time Face Capture and Reenactment of RGB Videos. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). http://www.graphics.stanford.edu/~niessner/papers/2016/1facetoface/thies2016face.pdf

Denton, E., Gross, S., and Fergus, R.
Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks. 2016 https://arxiv.org/abs/1611.06430

Sencar, H. T., and Memon, N.
Digital Image Forensics: There is More to a Picture than Meets the Eye. Springer. ISBN-13: 978-1461407560. https://www.amazon.com/Digital-Image-Forensics-There-Picture/dp/1461407567

National Research Council. Strengthening Forensic Science in the United States: A Path Forward. The National Academies Press, 2009. https://www.ncjrs.gov/pdffiles1/nij/grants/228091.pdf
