Learning Understandable Neural Networks With Nonnegative Weight Constraints - 2015
People can understand complex structures if they relate them to a number of isolated yet understandable concepts. Despite this, popular pattern recognition tools, such as decision tree or production rule learners, produce only flat models that do not build intermediate data representations. Neural networks, on the other hand, typically learn hierarchical but opaque models. We show how constraining neurons' weights to be nonnegative improves the interpretability of a network's operation. We analyze the proposed method on large data sets: the MNIST digit recognition data and the Reuters text categorization data. The patterns learned by traditional and constrained networks are contrasted with those learned with principal component analysis and nonnegative matrix factorization.
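To make the idea of nonnegative weight constraints concrete, here is a minimal sketch (not the paper's exact training procedure): a one-hidden-layer network trained by gradient descent, with the input-to-hidden weights projected back onto the nonnegative orthant after each update. The toy data, network sizes, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): 200 points, 10 features; label depends on the
# sum of the first 3 features.
X = rng.normal(size=(200, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Input-to-hidden weights start (and stay) nonnegative; output weights are free.
W1 = np.abs(rng.normal(scale=0.1, size=(10, 8)))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.3
for _ in range(1000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)
    # Backward pass for the cross-entropy loss
    dZ2 = (p - y) / len(X)
    dW2 = H.T @ dZ2
    db2 = dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * H * (1.0 - H)
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)
    # Gradient step; then project W1 onto the nonnegative orthant,
    # which enforces the interpretability constraint.
    W2 -= lr * dW2
    b2 -= lr * db2
    W1 = np.maximum(W1 - lr * dW1, 0.0)
    b1 -= lr * db1

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
acc = (pred == (y > 0.5)).mean()
```

Because each hidden unit can only add up (never subtract) input features, its weight vector reads as a parts-based pattern, which is the same intuition behind nonnegative matrix factorization.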