PROJECT TITLE : Deep Constraint-based Propagation in Graph Neural Networks

ABSTRACT: The popularity of Deep Learning techniques has rekindled interest in neural architectures that can process complex structures represented as graphs, namely Graph Neural Networks (GNNs). We focus on the GNN model originally proposed by Scarselli et al. in 2009, which encodes the state of each node through an iterative diffusion procedure that propagates information among neighboring nodes. During learning, this procedure must be recomputed at every epoch until it reaches the fixed point of a learnable state transition function. We propose a novel learning method for GNNs based on constrained optimization in the Lagrangian framework. The transition function and the node states are learned jointly, with the state convergence procedure expressed implicitly by a constraint satisfaction mechanism; this avoids both the iterative epoch-wise procedure and the unfolding of the network. The resulting computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. The scheme is further enhanced by multiple layers of constraints, which accelerate the diffusion process. An experimental analysis shows that the proposed method compares favorably with well-known models on several benchmarks.
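The mechanism described in the abstract can be sketched in code: node states are treated as free variables, the diffusion fixed point x_v = f(neighbors of v) is imposed as a constraint, and a saddle point of the Lagrangian is sought by gradient descent on states and weights together with gradient ascent on the multipliers. Everything below (the 4-node chain, scalar states, the tanh transition, the quadratic stabilizing penalty, finite-difference gradients) is an illustrative assumption for this toy sketch, not the paper's actual implementation.

```python
import math

# Toy graph: a 4-node chain, with an adjacency list of neighbors.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
neigh = {v: [] for v in range(n)}
for a, c in edges:
    neigh[a].append(c)
    neigh[c].append(a)

targets = {3: 0.5}       # supervised target on node 3 (illustrative)
wb = [0.5, 0.0]          # transition parameters [w, b]
x = [0.0] * n            # free node-state variables (one scalar per node)
lam = [0.0] * n          # one Lagrange multiplier per state constraint
rho = 1.0                # quadratic penalty added for stability
                         # (an augmented-Lagrangian variant, an assumption here)

def f(v):
    """Learnable state transition: tanh of the mean neighbor state."""
    m = sum(x[u] for u in neigh[v]) / len(neigh[v])
    return math.tanh(wb[0] * m + wb[1])

def residual(v):
    """Constraint x_v - f(v); zero exactly at the diffusion fixed point."""
    return x[v] - f(v)

def lagrangian():
    loss = sum((x[v] - t) ** 2 for v, t in targets.items())
    cons = sum(lam[v] * residual(v) + 0.5 * rho * residual(v) ** 2
               for v in range(n))
    return loss + cons

def num_grad(vec, i, eps=1e-6):
    """Central finite difference so the sketch needs no autograd library."""
    vec[i] += eps; hi = lagrangian()
    vec[i] -= 2 * eps; lo = lagrangian()
    vec[i] += eps
    return (hi - lo) / (2 * eps)

lr = 0.05
for _ in range(5000):
    gx = [num_grad(x, v) for v in range(n)]
    gw = [num_grad(wb, i) for i in range(2)]
    gl = [residual(v) for v in range(n)]      # dL/d(lambda_v) is the residual
    for v in range(n):
        x[v] -= lr * gx[v]                    # descent on node states
        lam[v] += lr * gl[v]                  # ascent on multipliers
    for i in range(2):
        wb[i] -= lr * gw[i]                   # descent on transition weights

print("max |residual|:", max(abs(residual(v)) for v in range(n)))
print("state of supervised node:", x[3])
```

Note the key property the abstract emphasizes: there is no inner fixed-point iteration at each epoch; the diffusion constraint is driven to satisfaction by the multiplier dynamics while the states and weights are optimized jointly.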