Researchers at Stanford University and Google developed a neural network trained to transform aerial images into street maps and back again, but were surprised to discover that details omitted from the generated map reappeared when the system reconstructed the original image.
The model, CycleGAN, learned to "hide" information about a source image within the images it generated via a nearly imperceptible, high-frequency signal.
The trick ensures the generator can recover the original sample and thus satisfy the cycle-consistency requirement, while the generated image still looks realistic.
The researchers found that CycleGAN reproduced details of the aerial image by exploiting subtle color variations in the map that the human eye cannot detect.
In essence, the network did not learn to reconstruct the aerial image from the map's visible content; it simply encoded the original image's features into imperceptible noise patterns in the generated map.
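The effect described above is essentially steganography: smuggling data through an image via pixel changes too small to see. A minimal sketch of the same idea, using simple least-significant-bit embedding rather than CycleGAN's learned encoding (all array names and sizes here are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the generated "map" image (hypothetical 8-bit grayscale data).
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
# "Source details" to smuggle through: one bit per pixel.
secret_bits = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

# Embed: overwrite each pixel's least-significant bit -- a change of at most
# 1/255 in intensity, invisible to the eye (loosely analogous to the
# low-amplitude, high-frequency signal the network learned to use).
stego = (cover & 0xFE) | secret_bits

# Recover: read the least-significant bits back out of the stego image.
recovered = stego & 0x01

# The hidden data survives intact, and no pixel moved by more than 1 level.
assert np.array_equal(recovered, secret_bits)
max_change = int(np.max(np.abs(stego.astype(int) - cover.astype(int))))
print(max_change)  # → 1
```

Unlike this hand-crafted scheme, CycleGAN arrived at its encoding on its own, because hiding the source details was the easiest way to satisfy its cycle-consistency training objective.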
From Daily Mail (U.K.)
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA