Swingler K (2014) A Walsh analysis of multilayer perceptron function. In: Proceedings of the 6th International Conference on Neural Computation Theory and Applications (NCTA 2014, part of IJCCI 2014), Rome, Italy, 22–24 October 2014. Setúbal, Portugal: Science and Technology Publications, pp. 5–14. http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0004974800050014; https://doi.org/10.5220/0004974800050014
Abstract The multilayer perceptron (MLP) is a widely used neural network architecture, but it suffers from the fact that its knowledge representation is not readily interpreted. Hidden neurons take the role of feature detectors, but the popular learning algorithms (back-propagation of error, for example), coupled with random starting weights, mean that the function implemented by a trained MLP can be difficult to analyse. This paper proposes a method for understanding the structure of the function learned by MLPs that model functions of the class f : {−1, 1}^n → ℝ^m. The approach characterises a given MLP using Walsh functions, which make the interactions among subsets of variables explicit. Demonstrations are presented of this analysis used to monitor complexity during learning, understand function structure and measure the generalisation ability of trained networks.
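The Walsh decomposition the abstract refers to expresses a function on {−1, 1}^n as a weighted sum of parity (product) terms, one per subset of input variables; the coefficient on each subset measures the strength of that interaction. A minimal sketch of computing these coefficients by exhaustive enumeration (an illustration of the general technique, not the paper's own implementation, and tractable only for small n):

```python
# Sketch: Walsh coefficients of a function f : {-1, 1}^n -> R.
# The coefficient for a subset S of inputs is the average over all
# 2^n points of f(x) times the product of the x_i with i in S;
# a nonzero coefficient reveals an interaction among exactly those variables.
from itertools import product, combinations
from math import prod

def walsh_coefficients(f, n):
    """Return {subset: coefficient} for every subset of the n inputs."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total = sum(f(x) * prod(x[i] for i in S) for x in points)
            coeffs[S] = total / len(points)
    return coeffs

# Example: f(x) = x0 * x1 is a pure pairwise interaction, so only
# the coefficient for the subset (0, 1) is nonzero.
coeffs = walsh_coefficients(lambda x: x[0] * x[1], n=3)
print(coeffs[(0, 1)])  # 1.0
print(coeffs[(0,)])    # 0.0
```

Applied to a trained MLP in place of the toy lambda, the same enumeration makes explicit which variable subsets the network's function actually depends on, which is the sense in which the paper uses Walsh functions to expose interaction structure.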
Keywords Multilayer Perceptrons; Walsh Functions; Network Function Analysis