
Researcher: 'We Should Be Worried' This Computer Thought a Turtle Was a Gun

By New Scientist

November 6, 2017



The Massachusetts Institute of Technology's LabSix artificial intelligence (AI) research team has demonstrated the first example of a real-world three-dimensional (3D) object becoming "adversarial" at any angle.

An input subtly altered--sometimes by only a few pixels--to deceive an AI is known as an adversarial example, and the LabSix researchers successfully tricked an AI into believing a real-world object--in this case, a 3D-printed turtle--was a firearm.
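The basic pixel-level attack can be illustrated with a toy sketch. The following is a minimal, hedged example of a fast-gradient-sign-style perturbation against a throwaway linear "classifier" in NumPy; the model, the `weights`, and the step size `epsilon` are illustrative assumptions, not LabSix's actual method or ImageNet model.

```python
import numpy as np

# Toy linear classifier: score(x) = w . x, positive score => class "turtle".
# This stands in for a real neural network purely for illustration.
rng = np.random.default_rng(0)
weights = rng.normal(size=100)          # assumed toy model parameters

def score(x):
    return weights @ x

# An input the toy model classifies confidently as "turtle".
x = weights / np.linalg.norm(weights)

# Gradient-sign step: nudge every "pixel" by at most epsilon in the
# direction that most decreases the score. For a linear model the
# gradient of the score w.r.t. x is simply `weights`.
epsilon = 0.25                          # assumed perturbation budget
x_adv = x - epsilon * np.sign(weights)

# The perturbation is tiny per pixel, but the class score flips sign.
print(score(x) > 0, score(x_adv) < 0)
```

The point of the sketch is that many small, bounded per-pixel changes can add up to a large change in the classifier's output, which is why such perturbations can be nearly invisible to humans yet decisive for the model.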

The group spent six weeks developing an algorithm that confuses a neural network no matter how the network views the object; with a few small changes to the turtle's coloring, they made the computer think it was looking at a rifle regardless of the viewing angle.
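Making a perturbation survive every viewing angle is the hard part: a pattern tuned for one view usually breaks under rotation or lighting changes. One way to sketch the idea is to attack the *expected* behavior over many random transformations rather than a single image. The toy below uses a linear classifier and mild random linear "viewing transformations" as stand-in assumptions; it is not LabSix's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
weights = rng.normal(size=d)            # assumed toy model parameters

def score(x):
    return weights @ x                  # positive score => "turtle"

def random_transform(rng):
    # A mild random linear transformation, standing in for a change of
    # viewpoint or lighting (identity plus small noise).
    return np.eye(d) + 0.1 * rng.normal(size=(d, d))

x = weights / np.linalg.norm(weights)   # confidently "turtle" input

# Average the score's gradient over many sampled transformations, then
# take one gradient-sign step against that average. For a linear model,
# the gradient of score(T @ x) w.r.t. x is T.T @ weights.
grads = [random_transform(rng).T @ weights for _ in range(200)]
avg_grad = np.mean(grads, axis=0)
epsilon = 0.4                           # assumed perturbation budget
x_adv = x - epsilon * np.sign(avg_grad)

# The perturbation should lower the score even under transformations
# it was never optimized against.
new_scores = [score(random_transform(rng) @ x_adv) for _ in range(100)]
print(np.mean(new_scores) < 0)
```

Because the attack is optimized against the distribution of transformations rather than one fixed view, the perturbation keeps working on freshly sampled views, which is the property that let the physical turtle stay "adversarial at any angle."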

"More and more real-world systems are going to start using these technologies," notes LabSix member Anish Athalye. "We need to understand what's going on with them, understand their failure modes, and make them robust against any kinds of attack."


Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
