Carnegie Mellon University (CMU) researchers have developed a technique that inhibits the Amazon Alexa voice assistant's ability to respond to commands by playing an audio clip. The researchers first used an artificial intelligence-generated clip of guitar sounds to suppress responses from their own voice assistant, then tested the attack on Alexa.
Alexa responded to the wake word "Alexa" only 11% of the time while the clip was playing, versus 80% when other music was playing and 93% with no audio clip at all. CMU's Juncheng Li said the attack could be used to trick, confuse, or distract people, but that it should be relatively easy for Amazon to tweak Alexa to identify and evade such exploits.
Amazon said the attack presents little danger, as "it would require specific audio samples to be played at the same time as a user saying the wake word and would only increase times where the wake word is not detected."
From New Scientist
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA