Adversarial attacks can improve the reliability of neural networks (NNs) in predicting molecular energies by quantifying their uncertainty, according to a new report by Massachusetts Institute of Technology (MIT) researchers. The team used adversarial attacks to sample molecular geometries on a potential energy surface (PES), training multiple NNs to predict the PES from the same data and using their disagreement to quantify uncertainty.
"We aspire to have a model that is perfect in the regions we care about [i.e., the ones that the simulation will visit] without having had to run the full ML [machine learning] simulation, by making sure that we make it very good in high-likelihood regions where it isn't," said MIT's Rafael Gomez-Bombarelli.
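The idea described above can be sketched in a toy form. Everything below (the quartic one-dimensional stand-in for a PES, the five-member surrogate ensemble, the step sizes) is illustrative only, not the MIT group's actual neural networks or attack procedure:

```python
# Toy sketch of uncertainty-driven "adversarial" sampling on a potential
# energy surface (PES). All models and constants are illustrative stand-ins,
# not the MIT group's actual NNs.
import numpy as np

# A small ensemble of surrogate PES models that agree near x = 0 but
# diverge at larger displacements (mimicking NNs fit to the same data).
ensemble = [
    (lambda x, a=1.0 + 0.05 * i, b=2.0 - 0.1 * i: a * x**4 - b * x**2)
    for i in range(5)
]

def uncertainty(x):
    # Ensemble disagreement (variance of the predictions) serves as the
    # uncertainty estimate.
    return np.array([m(x) for m in ensemble]).var()

def attack(x, steps=50, lr=0.02, eps=1e-4, bound=1.5):
    # Sign-gradient ascent on the uncertainty: push the "geometry" toward
    # where the ensemble disagrees most, clipped to a trust region.
    for _ in range(steps):
        grad = (uncertainty(x + eps) - uncertainty(x - eps)) / (2 * eps)
        x = float(np.clip(x + lr * np.sign(grad), -bound, bound))
    return x

x0 = 0.1            # a geometry the surrogates agree on
x_adv = attack(x0)  # a high-uncertainty geometry found by the attack
print(uncertainty(x0), uncertainty(x_adv))
```

In the actual work the attack differentiates through the NN ensemble with respect to atomic coordinates; the finite-difference sign ascent here is just the simplest self-contained stand-in. Geometries found this way would then be labeled and added to the training set.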
From MIT News
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA