Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a Mixed-Scale Dense Convolutional Neural Network, a system that requires far fewer parameters and training images than conventional image-recognition approaches.
A typical neural network is composed of layers, each of which performs a specific analysis; each layer informs the next, so relevant information must be copied and passed along. Standard practice is to examine fine-scale information in the early layers and large-scale information in the later layers.
However, the new system mixes different scales within each layer, says Berkeley Lab's Daniel Pelt. This means large-scale information is analyzed earlier alongside fine-scale information, enabling the algorithm to focus on the relevant fine-grained details.
In addition, the layers in the new system are densely connected, meaning information does not have to be copied repeatedly throughout the network, and earlier layers can communicate relevant information directly to layers later in the series.
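The two ideas described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the Berkeley Lab implementation: the 3x3 kernels, the cycling dilation rates, the random weights, and all function names are assumptions chosen to show how each layer (a) convolves at a different scale via dilation and (b) densely reuses every earlier feature map instead of copying information forward layer by layer.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation):
    """'Same'-size 2D convolution of a single-channel image with a
    3x3 kernel whose taps are spread apart by `dilation` pixels
    (dilation 1 = ordinary conv; larger dilations see coarser scales)."""
    h, w = image.shape
    padded = np.pad(image, dilation)  # zero-pad so output keeps shape
    out = np.zeros_like(image, dtype=float)
    for ki in range(3):
        for kj in range(3):
            oi, oj = ki * dilation, kj * dilation
            out += kernel[ki, kj] * padded[oi:oi + h, oj:oj + w]
    return out

def msd_forward(image, num_layers=4, rng=None):
    """Toy forward pass of a mixed-scale dense network:
    the dilation rate cycles with depth (mixing fine and coarse scales
    throughout the network rather than fine-first, coarse-later), and
    each layer reads *all* previous feature maps (dense connectivity),
    so earlier layers feed later ones directly."""
    rng = np.random.default_rng(0) if rng is None else rng
    features = [image.astype(float)]
    for i in range(num_layers):
        dilation = (i % 3) + 1            # scales mixed across every layer
        total = np.zeros_like(features[0])
        for f in features:                # dense reuse of every earlier map
            k = rng.normal(scale=0.1, size=(3, 3))  # untrained toy weights
            total += dilated_conv2d(f, k, dilation)
        features.append(np.maximum(total, 0.0))     # ReLU activation
    return features

feats = msd_forward(np.ones((8, 8)))
```

Because every layer's output stays available to all later layers, the network never needs to re-encode information it has already computed, which is one reason the approach can get by with fewer trainable parameters.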
From Government Computer News
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA