PROJECT TITLE: Efficient Model Learning Methods for Actor–Critic Control

ABSTRACT: We propose two new actor–critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, which, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor; instead, it learns a reference model that represents the desired behavior, and the desired control actions are computed from it using the inverse of the learned process model. The two novel methods and a standard actor–critic algorithm are applied to the pendulum swing-up problem, on which the novel methods learn faster than the standard algorithm.
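
To illustrate the LLR memory that both algorithms rely on, the sketch below stores transition samples and, for a query point, fits a linear model on the k nearest stored neighbours; the local slope it returns is the kind of process-model derivative a model-based actor update can exploit. This is a minimal sketch under stated assumptions: the class and parameter names (LocalLinearRegression, k, max_samples), the FIFO memory, the least-squares fit, and the toy dynamics are illustrative choices, not the authors' implementation.

```python
import numpy as np

class LocalLinearRegression:
    """Memory-based local linear regression (LLR).

    Stores (input, output) samples and answers a query by fitting a
    linear model y ~ A x + b on the k nearest stored samples.
    """

    def __init__(self, k=10, max_samples=2000):
        self.k = k
        self.max_samples = max_samples
        self.inputs, self.outputs = [], []

    def add(self, x, y):
        self.inputs.append(np.atleast_1d(x).astype(float))
        self.outputs.append(np.atleast_1d(y).astype(float))
        if len(self.inputs) > self.max_samples:   # simple FIFO memory
            self.inputs.pop(0)
            self.outputs.pop(0)

    def predict(self, x):
        """Return the prediction y(x) and the local slope dy/dx."""
        X = np.asarray(self.inputs)
        Y = np.asarray(self.outputs)
        x = np.atleast_1d(x).astype(float)
        k = min(self.k, len(X))
        idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
        # Least-squares fit of an affine model on the k nearest neighbours.
        Phi = np.hstack([X[idx], np.ones((k, 1))])
        beta, *_ = np.linalg.lstsq(Phi, Y[idx], rcond=None)
        A, b = beta[:-1].T, beta[-1]
        return A @ x + b, A


# Hypothetical usage: learn a process model x' = f(x, u) from transitions,
# then read off df/du from the local model, the quantity a model-based
# actor update can combine with the critic gradient dV/dx'.  The toy
# linear dynamics below stand in for the pendulum.
if __name__ == "__main__":
    model = LocalLinearRegression(k=5)
    rng = np.random.default_rng(0)
    for _ in range(200):
        xu = rng.uniform(-1, 1, size=2)        # concatenated state x and action u
        x_next = 0.9 * xu[0] + 0.2 * xu[1]     # assumed toy dynamics
        model.add(xu, x_next)
    x_next_hat, J = model.predict(np.array([0.1, 0.0]))
    print("predicted x':", x_next_hat, "df/du:", J[:, 1])
```

The same memory structure can hold the critic and, for the second algorithm, the inverse process model mapping a desired next state back to a control action; the sketch only covers the forward-model piece.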