1. D. Andersson. Extending Friedmann's lower bound to the Hoffman-Karp algorithm. Preprint, June 2009.
2. D. Andersson, T. D. Hansen, and P. B. Miltersen. Toward better bounds on policy iteration. Preprint, June 2009.
3. R. B. Ash and C. A. Doléans-Dade. Probability and Measure Theory. Academic Press, 2000.
4. O. Friedmann. A super-polynomial lower bound for the parity game strategy improvement algorithm as we know it. In Logic in Computer Science (LICS). IEEE, 2009.
5. R. Howard. Dynamic Programming and Markov Processes. Technology Press and Wiley, 1960.
6. Y. Mansour and S. P. Singh. On the complexity of policy iteration. In K. B. Laskey and H. Prade, editors, UAI '99: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 401-408. Morgan Kaufmann, 1999.
7. M. Melekopoglou and A. Condon. On the complexity of the policy improvement algorithm for Markov decision processes. ORSA Journal on Computing, 6:188-192, 1994.
8. M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1994.
9. J. Vöge and M. Jurdziński. A discrete strategy improvement algorithm for solving parity games (extended abstract). In E. A. Emerson and A. P. Sistla, editors, Computer Aided Verification, 12th International Conference, CAV 2000, Proceedings, volume 1855 of LNCS, pages 202-215, Chicago, IL, USA, 2000. Springer-Verlag.