PROJECT TITLE: Optimizing Attention for Sequence Modeling via Reinforcement Learning

ABSTRACT: Attention has proven highly effective for sequence modeling, particularly for capturing the more informative parts of an input while learning a deep representation. Recent research has shown, however, that attention values do not always match intuition in tasks such as machine translation and sentiment classification. In this work, we apply deep reinforcement learning to automatically optimize the attention distribution while minimizing end-task training loss. Attention weights are adjusted through iterative actions, so that more informative words automatically receive more attention as the environment reaches a more adequate state. Experiments on a variety of tasks and attention networks show that our model consistently improves task performance and yields more reasonable attention distributions. Our further analysis reveals that this retrofitting approach can also improve the explainability of baseline attention.
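The abstract's core idea, iteratively adjusting attention weights with reinforcement learning so that informative tokens receive more attention, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it is a toy REINFORCE loop, under the assumption that attention logits parameterize a policy over which token to attend to, and that a hypothetical per-token `informativeness` array stands in for the end-task reward signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (not from the paper): 5 tokens, where tokens 2 and 3
# are the "informative" ones. Attending to a token yields its score as reward.
informativeness = np.array([0.1, 0.2, 1.0, 0.9, 0.1])

logits = np.zeros(5)   # attention logits, optimized by policy gradient
lr = 0.3               # learning rate
baseline = 0.0         # running reward baseline to reduce gradient variance

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    attn = softmax(logits)                       # current attention distribution
    i = rng.choice(5, p=attn)                    # action: attend to token i
    r = informativeness[i]                       # reward proxy for end-task loss
    baseline = 0.9 * baseline + 0.1 * r          # exponential moving average
    grad_logpi = -attn.copy()                    # grad of log softmax ...
    grad_logpi[i] += 1.0                         # ... at the sampled action
    logits += lr * (r - baseline) * grad_logpi   # REINFORCE update

attn = softmax(logits)
print(np.round(attn, 3))  # attention should concentrate on informative tokens
```

After training, the attention mass shifts toward the high-reward tokens, mirroring the abstract's claim that iterative RL actions steer attention toward more informative words.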