Research Papers

Motion Imitation Based on Sparsely Sampled Correspondence

Author and Article Information
Shuo Jin

Department of Mechanical and
Automation Engineering,
The Chinese University of Hong Kong,
Hong Kong 999077, China
e-mail: jerry.shuojin@gmail.com

Chengkai Dai

Department of Mechanical and
Automation Engineering,
The Chinese University of Hong Kong,
Hong Kong 999077, China
e-mail: ckdai@mae.cuhk.edu.hk

Yang Liu

Microsoft Research Asia,
Beijing 100080, China
e-mail: yangliu@microsoft.com

Charlie C. L. Wang

Department of Design Engineering and
TU Delft Robotics Institute,
Delft University of Technology,
Delft 2628, The Netherlands
e-mail: c.c.wang@tudelft.nl

1Corresponding author.

Contributed by the Computers and Information Division of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received August 16, 2016; final manuscript received May 2, 2017; published online June 15, 2017. Editor: Bahram Ravani.

J. Comput. Inf. Sci. Eng 17(4), 041009 (Jun 15, 2017) (7 pages) Paper No: JCISE-16-2045; doi: 10.1115/1.4036923

Existing techniques for motion imitation often suffer from a certain level of latency due to their computational overhead or the large set of correspondence samples they must search. To achieve real-time imitation with small latency, we present a framework in this paper to reconstruct motion on humanoids based on sparsely sampled correspondence. The imitation problem is formulated as finding the projection of a point from the configuration space of a human's poses into the configuration space of a humanoid. An optimal projection is defined as the one that minimizes a back-projected deviation among a group of candidates, and it can be determined in a very efficient way. Benefiting from this formulation, effective projections can be obtained using sparsely sampled correspondence, whose generation scheme is also introduced in this paper. Our method is evaluated by applying human motion captured by an RGB-depth (RGB-D) sensor to a humanoid in real time. Continuous motion can be realized and used in the example application of teleoperation.
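The candidate-selection idea described above can be sketched in a few lines. The following is a minimal illustration, assuming pose descriptors are fixed-length NumPy vectors and that correspondence is given as paired human/humanoid landmark arrays; the function name `project_pose`, the choice of k nearest landmarks, and the inverse-distance blend used to generate an extra candidate are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def project_pose(q_human, human_lm, robot_lm, k=4):
    """Map a human pose descriptor to a humanoid configuration using
    sparse correspondence pairs (human_lm[i] <-> robot_lm[i]).

    Candidates are the k nearest robot-side landmarks plus a weighted
    blend of them; the candidate whose back-projected human pose
    deviates least from the query is returned.
    """
    # Distance from the query pose to every human-side landmark
    d = np.linalg.norm(human_lm - q_human, axis=1)
    nn = np.argsort(d)[:k]  # indices of the k nearest correspondence pairs

    # Inverse-distance weights for a blended candidate
    w = 1.0 / (d[nn] + 1e-9)
    w /= w.sum()

    # Candidate projections on the robot side, and their back-projections
    # (the human-side counterparts generated the same way)
    cand_robot = list(robot_lm[nn]) + [w @ robot_lm[nn]]
    cand_back = list(human_lm[nn]) + [w @ human_lm[nn]]

    # Pick the candidate with minimal back-projected deviation
    dev = [np.linalg.norm(b - q_human) for b in cand_back]
    return cand_robot[int(np.argmin(dev))]
```

Because the deviation is evaluated in the human's configuration space, the search stays cheap even when the landmark set is sparse, which is the property the framework exploits for real-time imitation.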

Copyright © 2017 by ASME


Figures

Fig. 1

An illustration of our framework for motion imitation using configuration projection

Fig. 2

An example of imitation realized by our framework working with the Nao humanoid

Fig. 3

An illustration of finding an optimal point that minimizes a back-projected deviation (with L=M=4)

Fig. 4

Feature vectors of human and humanoid: (a) the human skeleton from a Kinect sensor, (b) the corresponding pose descriptor of a human body, consisting of 19 unit vectors, and (c) the pose descriptor for a Nao humanoid formed by all DOFs on its joints

Fig. 5

Basic poses serving as benchmarks for similarity evaluation

Fig. 6

Eight basic poses reconstructed by our method (left of each pair), compared with the ground truth (right of each pair). The similarity metrics Mmax and Mavg of each pair are also reported. The evaluation is performed on a projection defined by 1644 landmark pairs.

Fig. 7

Statistics of the two metrics (in degrees) over eight motions: Mmax (the upper curve in each subfigure) and Mavg (the lower curve in each subfigure). The evaluation is also performed on a projection with 1644 landmark pairs.

Fig. 8

Statistics of Mmax and Mavg (in degrees) for motion reconstructed using landmark sets with different numbers of correspondence samples, indicating that more landmark pairs lead to better results

Fig. 9

Application of teleoperation using the Nao humanoid: picking up a ring and putting it into a box (left), and lifting a poster with two hands (right)

Fig. 10

Tests on the lab-made Poppy humanoid: full-body motion (left) and simultaneous imitation (right) in a heterogeneous environment with a Nao and a Poppy
