Some Insights into the Geometry and Training of Neural Networks

Neural networks have been used successfully for classification tasks in a rapidly growing number of practical applications. Despite their popularity and widespread use, they are often still treated as enigmatic black boxes whose inner workings remain insufficiently understood. In this paper we provide new insights into training and classification by analyzing neural networks from a feature-space perspective. We explain the formation of decision regions and study some of their combinatorial aspects. We place particular emphasis on the connections between the network's weight and bias terms and the properties of decision boundaries and other regions that exhibit varying levels of classification confidence. We show how the error backpropagates in these regions and emphasize the important role they play in the formation of gradients. These findings expose connections between the scaling of the weight parameters and the density of the training samples. This sheds more light on the vanishing-gradient problem, explains the need for regularization, and suggests an approach for subsampling training data to improve performance.
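The relationship the abstract points to between weight/bias scaling, decision boundaries, and vanishing gradients can be made concrete with a small numerical sketch. The snippet below is not taken from the report; it is an illustrative example built on assumptions chosen here (a single sigmoid unit, the weights `w = [2, -1]`, bias `b = 0.5`, scale factors 1/5/25, and the 0.01 gradient threshold are all hypothetical). It shows that scaling the parameters by a factor s leaves the decision boundary w·x + b = 0 in place while shrinking the band of inputs whose gradient is non-negligible, so fewer and fewer samples contribute a useful error signal.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid unit classifies points by the sign of w.x + b;
# its decision boundary is the hyperplane w.x + b = 0.
w = np.array([2.0, -1.0])   # illustrative weights (assumption)
b = 0.5                     # illustrative bias (assumption)

# Sample points along the direction normal to the boundary.
t = np.linspace(-3.0, 3.0, 601)
points = np.outer(t, w / np.linalg.norm(w))

# Scaling (w, b) by s keeps the boundary fixed but sharpens the transition:
# outputs saturate at 0 or 1, and the derivative dy/dz = y(1 - y) becomes
# negligible everywhere except a thin band around the boundary.
for s in (1.0, 5.0, 25.0):
    z = points @ (s * w) + s * b          # scaled pre-activation
    y = sigmoid(z)
    grad = y * (1.0 - y)                  # sigmoid derivative dy/dz
    active = float(np.mean(grad > 0.01))  # samples still receiving a useful gradient
    print(f"scale {s:5.1f}: fraction of samples with non-negligible gradient = {active:.0%}")
```

Running the sketch, the fraction of sampled points with an appreciable gradient drops sharply as the scale grows, which is the basic mechanism linking large weights, sample density near the boundary, and vanishing gradients that the report examines.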

By: E. van den Berg

Published in: IBM Research Report RC25510, 2014

