GAIN stands for Graph Attention & Interaction Network for Inductive Semi-Supervised Learning Over Large-Scale Graphs.

PROJECT TITLE : GAIN: Graph Attention & Interaction Network for Inductive Semi-Supervised Learning Over Large-Scale Graphs

ABSTRACT: Graph neural networks (GNNs) have achieved state-of-the-art performance on a variety of machine learning tasks, including recommendation, node classification, and link prediction. GNN models produce node embeddings by combining each node's own features with information aggregated from its neighbors. Most existing GNN models use a single type of aggregator, such as mean-pooling, to gather information from neighboring nodes, and then add or concatenate the aggregator's output to the representation vector of the center node. However, a single aggregator struggles to capture the different aspects of neighborhood information, and updating embeddings only through simple addition or concatenation severely limits the expressive capacity of GNNs. Moreover, existing supervised and semi-supervised GNN models are trained on a node-label loss alone, which neglects information about the graph structure.

In this paper, we propose a novel graph neural network architecture, the Graph Attention & Interaction Network (GAIN), designed for inductive learning on graphs. Unlike previous GNN models that rely on a single aggregation method, we apply multiple types of aggregators to gather neighborhood information from different perspectives and integrate their outputs through an aggregator-level attention mechanism. We also design a graph-regularized loss to better capture the topological relationships between nodes in the graph. Furthermore, we introduce the notion of graph feature interaction and propose a vector-wise explicit feature interaction mechanism for updating the node embeddings. We conduct comprehensive experiments on two node-classification benchmarks and a real-world financial news dataset. The experimental results show that our GAIN model outperforms the current state-of-the-art performance on all of these tasks.
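To make the two ideas above concrete, the sketch below shows one plausible way a single layer could fuse several neighbor aggregators with aggregator-level attention and then apply a vector-wise feature interaction with the center-node embedding. The choice of aggregators (mean and max-pooling), the element-wise product as the interaction, and all class and parameter names (GAINLayerSketch, self_proj, att, etc.) are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAINLayerSketch(nn.Module):
    """Illustrative layer: several neighbor aggregators fused by
    aggregator-level attention, followed by a vector-wise feature
    interaction with the center-node embedding. The aggregator set and
    the interaction form are assumptions, not GAIN's exact formulation."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, out_dim)
        self.mean_proj = nn.Linear(in_dim, out_dim)
        self.pool_proj = nn.Linear(in_dim, out_dim)
        # one attention score per aggregator, conditioned on its output
        self.att = nn.Linear(out_dim, 1, bias=False)
        self.update = nn.Linear(3 * out_dim, out_dim)

    def forward(self, h_self, h_neigh):
        # h_self:  [N, in_dim]     center-node features
        # h_neigh: [N, K, in_dim]  sampled neighbor features
        agg_mean = F.relu(self.mean_proj(h_neigh.mean(dim=1)))
        agg_pool = F.relu(self.pool_proj(h_neigh.max(dim=1).values))

        # aggregator-level attention: softmax over the aggregator outputs
        aggs = torch.stack([agg_mean, agg_pool], dim=1)   # [N, 2, out_dim]
        scores = F.softmax(self.att(aggs), dim=1)         # [N, 2, 1]
        neigh = (scores * aggs).sum(dim=1)                # [N, out_dim]

        # vector-wise explicit feature interaction (here: element-wise product)
        center = F.relu(self.self_proj(h_self))
        interaction = center * neigh
        out = self.update(torch.cat([center, neigh, interaction], dim=-1))
        return F.normalize(F.relu(out), p=2, dim=-1)

# usage: 8 nodes, 5 sampled neighbors each, 64-dim input features
layer = GAINLayerSketch(in_dim=64, out_dim=32)
embeddings = layer(torch.randn(8, 64), torch.randn(8, 5, 64))
```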
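The graph-regularized loss mentioned in the abstract combines the usual node-label objective with a structural term. The sketch below uses a common random-walk-style regularizer that pulls the embeddings of linked nodes together and pushes randomly paired nodes apart; the function name, arguments, and the reg_weight coefficient are assumptions for illustration, and GAIN's exact regularizer may differ.

```python
import torch
import torch.nn.functional as F

def graph_regularized_loss(logits, emb, edge_index, labels, label_mask, reg_weight=0.5):
    """Node-label loss plus a graph-structure regularizer (a sketch).

    logits:     [N, C] class scores from the model
    emb:        [N, D] node embeddings produced by the GNN
    edge_index: [2, E] source/target indices of observed edges
    labels:     [N]    class labels, valid where label_mask is True
    """
    # supervised term on the labelled subset (semi-supervised setting)
    sup = F.cross_entropy(logits[label_mask], labels[label_mask])

    # structural term: linked pairs should be similar, random pairs dissimilar
    src, dst = edge_index
    pos = (emb[src] * emb[dst]).sum(dim=-1)
    rand_dst = torch.randint(0, emb.size(0), (src.size(0),), device=emb.device)
    neg = (emb[src] * emb[rand_dst]).sum(dim=-1)
    reg = -F.logsigmoid(pos).mean() - F.logsigmoid(-neg).mean()

    return sup + reg_weight * reg
```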