Researchers at the Massachusetts Institute of Technology have developed a system to gather more information from images used to train machine-learning models, including those that analyze medical scans to help diagnose and treat brain conditions. The new system uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can then be used to better train machine-learning models to find anatomical structures in new scans.
The system uses a convolutional neural network to automatically generate data for "image segmentation," the process of dividing an image into regions of pixels that are more meaningful and easier to analyze. The network analyzes unlabeled scans from different patients and different equipment to learn two kinds of variation: spatial (anatomical) and appearance (brightness and contrast). It then applies a random combination of those learned variations to a single labeled scan to synthesize new scans that are both realistic and accurately labeled.
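The key idea above is that a spatial warp moves the label map along with the image, while an appearance change touches only the image, so every synthesized scan stays correctly labeled. The sketch below illustrates this composition in NumPy on toy 2D arrays; the random displacement fields and brightness offsets stand in for transforms the network would learn from unlabeled scans, and all names (`warp_nearest`, `synthesize`) are illustrative, not from the researchers' code.

```python
import numpy as np

def warp_nearest(img, flow):
    """Warp a 2D array by a dense displacement field (nearest-neighbor lookup)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def synthesize(atlas_img, atlas_labels, flows, deltas, rng):
    """Sample one learned spatial transform and one learned appearance
    transform and apply them to the single labeled atlas scan.
    The spatial warp is applied to both image and labels; the
    appearance change (brightness/contrast) is applied to the image
    only, so the synthesized example remains accurately labeled."""
    flow = flows[rng.integers(len(flows))]
    delta = deltas[rng.integers(len(deltas))]
    new_img = warp_nearest(atlas_img, flow) + delta
    new_labels = warp_nearest(atlas_labels, flow)
    return new_img, new_labels

# Toy stand-ins: in the real system these would be a brain scan, its
# segmentation, and transforms learned by a network from unlabeled scans.
rng = np.random.default_rng(0)
atlas_img = rng.normal(size=(8, 8))
atlas_labels = (atlas_img > 0).astype(int)            # fake segmentation mask
flows = [rng.normal(scale=0.7, size=(2, 8, 8)) for _ in range(5)]
deltas = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(5)]

img, lab = synthesize(atlas_img, atlas_labels, flows, deltas, rng)
```

Repeating the last call with different samples yields arbitrarily many distinct, labeled training examples from the one annotated scan.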
From MIT News
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA