
Generative adversarial training of product of policies for robust and adaptive movement primitives

Pignat, Emmanuel; Girgin, Hakan; Calinon, Sylvain
Open Access • English
  • Published: 10 Oct 2020
  • Publisher: Zenodo
  • Country: Switzerland
Abstract
In learning from demonstrations, many generative models of trajectories make simplifying independence assumptions, sacrificing correctness for tractability and speed in the learning phase. The ignored dependencies, which are often the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which can introduce heavy distortions. In this work, we propose to use these approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework, stabilizing and accelerating the learning procedure. Our method addresses two problems: adaptability and robustness. To adapt the motions to varying contexts, we use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured by learning the stochastic dynamics with stochastic gradient descent and ensemble methods. Two experiments on a 7-DoF manipulator validate the approach.
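To make the adversarial idea concrete: in standard GAN theory, the optimal discriminator for data density p and generator density q is D*(x) = p(x) / (p(x) + q(x)). The minimal sketch below illustrates plugging a tractable, approximate demonstration density directly into this closed form instead of training a discriminator network; the two Gaussian densities are hypothetical stand-ins for illustration, not the trajectory models used in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative placeholders: an approximate demonstration density p (e.g., a
# Gaussian fit to demonstrated trajectories that ignores dynamic constraints)
# and the current policy's rollout density q.
p = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))
q = multivariate_normal(mean=np.ones(2), cov=2.0 * np.eye(2))

def discriminator(x):
    """Closed-form optimal discriminator D*(x) = p(x) / (p(x) + q(x))."""
    px, qx = p.pdf(x), q.pdf(x)
    return px / (px + qx)

print(discriminator(np.zeros(2)))       # > 0.5: more likely a demonstration
print(discriminator(3.0 * np.ones(2)))  # near 0: more likely generated
```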
Comment: Source code is available at https://github.com/emmanuelpignat/tf_robot_learning
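The linked repository contains the authors' actual implementation. As a hedged sketch of the product-of-Gaussian-policies idea from the abstract, the snippet below fuses Gaussian experts defined in several task spaces via the standard precision-weighted product of Gaussians; the linear task maps A_i and all names here are illustrative assumptions, not the repository's API.

```python
import numpy as np

def product_of_gaussian_policies(mus, sigmas, maps):
    """Fuse Gaussian experts N(mu_i, sigma_i), each defined in a task space
    related to the control variable u by a linear map x_i = A_i u, using the
    standard precision-weighted product of Gaussians."""
    dim = maps[0].shape[1]
    prec = np.zeros((dim, dim))   # accumulated precision (information matrix)
    info = np.zeros(dim)          # accumulated information vector
    for mu, sigma, A in zip(mus, sigmas, maps):
        lam = np.linalg.inv(sigma)   # expert precision in its own task space
        prec += A.T @ lam @ A        # pull the precision back to control space
        info += A.T @ lam @ mu       # pull the mean information back as well
    cov = np.linalg.inv(prec)
    return cov @ info, cov           # fused mean and covariance

# Example: two 1-D experts acting on different coordinates of a 2-D control.
A1, A2 = np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])
mu1, sig1 = np.array([1.0]), np.array([[0.1]])  # tight expert, coordinate 1
mu2, sig2 = np.array([0.0]), np.array([[1.0]])  # loose expert, coordinate 2
mean, cov = product_of_gaussian_policies([mu1, mu2], [sig1, sig2], [A1, A2])
```

The fused policy concentrates where the tight expert is confident (coordinate 1 stays near 1.0 with small variance) while remaining loose elsewhere, which is how competing task-space constraints are traded off.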
Subjects
free text keywords: Computer Science - Robotics
Funded by
EC | CoLLaboratE: Co-production CeLL performing Human-Robot Collaborative AssEmbly
  • Funder: European Commission (EC)
  • Project Code: 820767
  • Funding stream: H2020 | RIA
EC | MEMMO: Memory of Motion
  • Funder: European Commission (EC)
  • Project Code: 780684
  • Funding stream: H2020 | RIA