Publication · Conference object · Preprint · 2017

Deep reinforcement learning for high precision assembly tasks

Inoue, Tadanobu; De Magistris, Giovanni; Munawar, Asim; Yokoya, Tsuyoshi; Tachibana, Ryuki
Open Access
  • Published: 14 Aug 2017
  • Publisher: IEEE
Abstract
High precision assembly of mechanical parts requires accuracy exceeding the robot's own precision. Conventional part-mating methods used in current manufacturing require tedious tuning of numerous parameters before deployment. We show how a robot can successfully perform a tight-clearance peg-in-hole task by training a recurrent neural network with reinforcement learning. In addition to saving manual effort, the proposed technique is also robust against position and angle errors in the peg-in-hole task. The neural network learns to take the optimal action by observing the robot's sensors to estimate the system state. The advantages of our propose...
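The core idea in the abstract, a recurrent network that reads noisy sensor signals and keeps an internal estimate of the unobserved system state while choosing actions, can be sketched as below. This is a minimal illustrative sketch, not the authors' actual architecture: the observation size, hidden size, action set, and random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 6      # e.g. 3 force + 3 moment sensor readings (assumed)
HIDDEN = 8       # LSTM hidden size (assumed)
N_ACTIONS = 4    # e.g. move +/-x, +/-y (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights for the four LSTM gates (input, forget, cell, output) stacked
# into one matrix, plus a linear head mapping hidden state -> action values.
W = rng.normal(0, 0.1, (4 * HIDDEN, OBS_DIM + HIDDEN))
b = np.zeros(4 * HIDDEN)
W_q = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))

def lstm_step(obs, h, c):
    """Advance the recurrent cell one timestep on a sensor observation."""
    z = W @ np.concatenate([obs, h]) + b
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

def select_action(obs, h, c):
    """Greedy action from the value head; the hidden state (h, c) carries
    the network's running estimate of the unobserved peg/hole alignment."""
    h, c = lstm_step(obs, h, c)
    q = W_q @ h
    return int(np.argmax(q)), h, c

# Roll the policy over a short sequence of stand-in sensor readings.
h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
for t in range(5):
    obs = rng.normal(0, 1, OBS_DIM)   # placeholder for force/torque signals
    action, h, c = select_action(obs, h, c)
    print(t, action)
```

In the trained system the weights would be learned with reinforcement learning rather than drawn at random; the sketch only shows why recurrence helps, since each action depends on the whole observation history, not just the latest sensor reading.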
Subjects
Keywords: Computer Science - Robotics; Computer Science - Artificial Intelligence
