Harvard University and Massachusetts Institute of Technology (MIT) researchers warn in a recently published study that new artificial intelligence (AI) technology designed to enhance healthcare is vulnerable to misuse, citing "adversarial attacks" that deceive the system into making misdiagnoses as one example.
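The "adversarial attack" idea can be illustrated with a minimal sketch: a tiny, targeted perturbation of an input flips a model's output. The toy linear classifier below is entirely hypothetical (the weights, `predict_prob`, and the perturbation size `eps` are illustrative assumptions, not from the study), but it shows the fast-gradient-sign style of attack the researchers describe.

```python
import numpy as np

# Toy linear "diagnostic model": sigmoid(w @ x + b) > 0.5 means "disease".
# All names and values here are illustrative, not from the study.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights
b = 0.0

def predict_prob(x):
    """Predicted probability of the positive ("disease") class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model classifies as "no disease" (prob < 0.5).
x = -0.1 * np.sign(w)          # pushes w @ x negative
clean_prob = predict_prob(x)

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to x is just w, so a small step along sign(w)
# raises the predicted probability and flips the diagnosis.
eps = 0.3
x_adv = x + eps * np.sign(w)
adv_prob = predict_prob(x_adv)

print(clean_prob < 0.5, adv_prob > 0.5)  # the small perturbation flips the label
```

The per-feature change is small (0.3 in each coordinate), yet the prediction flips, which is the "subtle bits of information" problem the study highlights.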
A more likely scenario involves doctors, hospitals, and other organizations manipulating the AI in billing or insurance software in an attempt to maximize revenue.
The researchers said software developers and regulators must consider such possibilities as they build and evaluate AI technologies in the years to come.
MIT's Samuel Finlayson said, "The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information."
Changes doctors make to medical scans or other patient data in an effort to satisfy the AI used by insurance firms could also wind up in a patient's permanent record.
From The New York Times
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA