
Deepfake Audio Has a Tell

By Ars Technica

September 21, 2022


Researchers at the University of Florida can detect audio deepfakes by measuring acoustic and fluid-dynamic differences between organic and synthetic voice samples.

The researchers inverted techniques for replicating the sounds a person makes in order to acoustically model the vocal tract, estimating the shape of the speaker's tract during a segment of speech.

Applying the same process to deepfaked audio samples, by contrast, can yield modeled vocal tract shapes that do not occur in real people.

"By estimating the anatomy responsible for creating the observed speech, it's possible to identify whether the audio was generated by a person or a computer," the researchers explain.



Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA

