
We formulate an efficient approximation for multi-agent batch reinforcement learning, the approximated multi-agent fitted Q iteration (AMAFQI). We present a detailed derivation of our approach. We propose an iterative policy search and show that it yields a greedy policy with respect to multiple approximations of the centralized, learned Q-function. In each iteration and policy evaluation, AMAFQI requires a number of computations that scales linearly with the number of agents, whereas the analogous number of computations increases exponentially for the fitted Q iteration (FQI), a commonly used approach in batch reinforcement learning. This property of AMAFQI is fundamental for the design of a tractable multi-agent approach. We evaluate the performance of AMAFQI and compare it to FQI in numerical simulations. The simulations illustrate the significant computation time reduction when using AMAFQI instead of FQI in multi-agent problems and corroborate the similar performance of both approaches.
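The scaling contrast stated above can be sketched with a short counting argument. Assuming n agents, each with a discrete action set of size |A| (both names are illustrative, not from the paper), a centralized greedy step in FQI must evaluate the learned Q-function on every joint action, i.e. |A|^n times, while a per-agent search in the spirit of AMAFQI evaluates n agent-wise approximations |A| times each:

```python
def fqi_evaluations(n_agents: int, n_actions: int) -> int:
    """Evaluations of a centralized Q over the joint action space:
    exponential in the number of agents."""
    return n_actions ** n_agents


def amafqi_evaluations(n_agents: int, n_actions: int) -> int:
    """Evaluations when each agent maximizes its own approximated
    Q-function: linear in the number of agents."""
    return n_agents * n_actions


if __name__ == "__main__":
    # Illustrative counts for 4 actions per agent.
    for n in (2, 5, 10):
        print(n, fqi_evaluations(n, 4), amafqi_evaluations(n, 4))
```

This is only a cost-counting sketch of the stated complexity, not the AMAFQI algorithm itself, which is derived in the paper.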
FOS: Computer and information sciences, Computer Science - Machine Learning, Markov and semi-Markov decision processes, multi-agent reinforcement learning, Multi-agent systems, Systems and Control (eess.SY), Dynamic programming, Electrical Engineering and Systems Science - Systems and Control, Machine Learning (cs.LG), batch reinforcement learning, FOS: Electrical engineering, electronic engineering, information engineering, Stochastic systems in control theory (general), approximate dynamic programming, Markov decision process
