
Reinforcement learning deals with the problem of mapping situations (states) to actions so as to maximize a numerical reward while interacting with a dynamic and uncertain environment. Within the framework of Markov Decision Processes (MDPs), these methods are typically based on approximate dynamic programming using an appropriate calculation or approximation of the value function. In this work we propose new algorithms for multi-agent distributed iterative value function approximation, in which the agents are allowed to follow different behavior policies while evaluating the response to a single target policy. The algorithms assume a linear parametrization of the value function and are based on consensus-based distributed stochastic approximation. Under appropriate assumptions on the time-varying network topology and the overall state-visiting distributions of the agents, we prove weak convergence of the parameter estimates to the globally optimal point. We demonstrate that the agents are able to reach this solution collectively even when no individual agent can do so on its own.
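To illustrate the general idea, the following is a minimal sketch of consensus-based distributed policy evaluation with linear value approximation: each agent performs a local off-policy TD(0)-style update and then mixes its parameter vector with its neighbors' via consensus weights. The toy environment, the static consensus matrix, and the constant importance ratio are illustrative assumptions for this sketch, not the paper's actual algorithm or assumptions (which cover time-varying topologies and agent-specific behavior policies).

```python
import numpy as np

# Hypothetical toy setup: N agents evaluate one target policy with a
# linear value function V(s) ~= phi(s) @ theta, each following its own
# behavior policy; parameter estimates are mixed by consensus averaging.

rng = np.random.default_rng(0)
n_states, n_features, n_agents = 5, 3, 4
phi = rng.normal(size=(n_states, n_features))   # fixed feature matrix
gamma, alpha = 0.9, 0.05                        # discount, step size

# Row-stochastic consensus weights; here a static complete graph, whereas
# the paper allows time-varying network topologies.
A = np.full((n_agents, n_agents), 1.0 / n_agents)

theta = np.zeros((n_agents, n_features))        # one estimate per agent
states = rng.integers(n_states, size=n_agents)  # each agent's current state

for t in range(2000):
    local = np.empty_like(theta)
    for i in range(n_agents):
        s = states[i]
        s_next = rng.integers(n_states)         # placeholder transition
        r = float(s == 0)                       # placeholder reward
        # Stand-in for the importance ratio rho = pi(a|s) / mu_i(a|s)
        # correcting for agent i's behavior policy; fixed to 1 here.
        rho = 1.0
        td_error = r + gamma * phi[s_next] @ theta[i] - phi[s] @ theta[i]
        local[i] = theta[i] + alpha * rho * td_error * phi[s]
        states[i] = s_next
    theta = A @ local                           # consensus mixing step

print("per-agent parameter estimates (rows):\n", theta)
```

The key design point is the two-step structure of each iteration: a purely local stochastic-approximation update followed by a convex combination of neighbors' parameters, which is what lets agents with limited individual state coverage agree on a common solution.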
