Preprint · 2019

REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning

Brian Yang, Jesse Zhang, Vitchyr Pong, Sergey Levine, Dinesh Jayaraman
Open Access · English · Published: 17 May 2019
Abstract
Standardized evaluation measures have aided in the progress of machine learning approaches in disciplines such as computer vision and machine translation. In this paper, we make the case that robotic learning would also benefit from benchmarking, and present the "REPLAB" platform for benchmarking vision-based manipulation tasks. REPLAB is a reproducible and self-contained hardware stack (robot arm, camera, and workspace) that costs about 2000 USD, occupies a cuboid of size 70 × 40 × 60 cm, and permits full assembly within a few hours. Through this low-cost, compact design, REPLAB aims to drive wide participation by lowering the barrier to entry into robotics and to ...
Subjects: Computer Science - Robotics; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning