PROJECT TITLE:
Universal Approximation with Convex Optimization: Gimmick or Reality? [Discussion Forum]
This paper surveys, in a tutorial fashion, the recent history of universal learning machines, starting with the multilayer perceptron. The major push in recent years has been toward the design of universal learning machines using optimization methods that are linear in the parameters, such as the Echo State Network, the Extreme Learning Machine, and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts from vector spaces (i.e., basis functions and projections) that are easy for the computational intelligence community to understand. We illustrate how two of the CULMs behave in a simple example, and we conclude that it is indeed practical to build universal mappers with convex adaptation, which is an improvement over backpropagation.
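To make the idea of "optimization linear in the parameters" concrete, below is a minimal sketch of one CULM, the Extreme Learning Machine. This is an illustrative toy, not the paper's code: the network size, activation, and toy target function are assumptions. The hidden layer is random and fixed; only the output weights are trained, by ordinary least squares, which is a convex problem with a closed-form solution (unlike backpropagation).

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) sketch: a convex universal
# learning machine. The hidden layer weights are random and FIXED;
# only the linear output weights are fitted, by least squares.
# All sizes and the toy target below are illustrative assumptions.

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=200):
    """Fit output weights of a random-feature network by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # nonlinear random features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # convex, closed-form solve
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy usage: approximate sin(x) on [-3, 3]
X = np.linspace(-3, 3, 400)[:, None]
y = np.sin(X[:, 0])
model = elm_fit(X, y)
err = np.max(np.abs(elm_predict(model, X) - y))
```

Because the hidden features are fixed, the training problem is a linear least-squares fit in the output weights, so there is a single global optimum and no iterative gradient descent is needed.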