A Two-Layer Recurrent Neural Network for Nonsmooth Convex Optimization Problems - 2015
In this paper, a two-layer recurrent neural network is proposed to solve nonsmooth convex optimization problems subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed network has lower model complexity and avoids penalty parameters. It is proved that, from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov, and that from any initial point the state converges to an equilibrium point of the network. Finally, as applications, the proposed neural network is applied to nonlinear convex programming with linear constraints and to L1-norm minimization problems.
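The L1-norm minimization application mentioned above can be illustrated by a much simpler continuous-time idea: a projected subgradient flow that stays on the equality feasible region {x : Ax = b} while driving the nonsmooth objective ||x||_1 downward. The sketch below is not the paper's two-layer model; it is a minimal forward-Euler discretization of such a flow, with made-up problem data A and b, shown only to convey how a dynamical system can solve a constrained nonsmooth convex problem.

```python
# Minimal illustrative sketch (NOT the paper's two-layer network):
# projected subgradient flow for  min ||x||_1  s.t.  A x = b,
# discretized with forward-Euler steps. A and b are synthetic data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))            # underdetermined: 3 equations, 8 unknowns
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]               # sparse vector used only to build b
b = A @ x_true

# Orthogonal projector onto the null space of A; moving along P @ v
# keeps the state inside the equality feasible region {x : A x = b}.
P = np.eye(8) - A.T @ np.linalg.solve(A @ A.T, A)

# Feasible starting point: the minimum-Euclidean-norm solution of A x = b.
x = np.linalg.lstsq(A, b, rcond=None)[0]
l1_start = np.abs(x).sum()

h = 0.01                                   # Euler step size
for _ in range(5000):
    # sign(x) is a subgradient of ||x||_1; project it so A x = b is preserved
    x = x - h * P @ np.sign(x)

print("L1 norm:", l1_start, "->", np.abs(x).sum())
print("constraint residual:", np.linalg.norm(A @ x - b))
```

With a small fixed step the iterates chatter near the minimizer rather than converging exactly, which is precisely the kind of behavior the paper's finite-time feasibility and Lyapunov-stability analysis is designed to improve upon.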