Research Papers

Toward Safe Human Robot Collaboration by Using Multiple Kinects Based Real-Time Human Tracking

Author and Article Information
Carlos Morato

Department of Mechanical Engineering &
Institute for Systems Research,
University of Maryland,
College Park, MD 20742
e-mail: cmorato@umd.edu

Krishnanand N. Kaipa

Department of Mechanical Engineering &
Institute for Systems Research,
University of Maryland,
College Park, MD 20742
e-mail: kkrishna@umd.edu

Boxuan Zhao

Department of Mechanical Engineering &
Institute for Systems Research,
University of Maryland,
College Park, MD 20742
e-mail: zhaoboxuan@gmail.com

Satyandra K. Gupta

Department of Mechanical Engineering &
Institute for Systems Research,
University of Maryland,
College Park, MD 20742
e-mail: skgupta@umd.edu

Collection of all points in the work cell that are occupied by the robot.

Angle between the sensor axis and the horizontal plane.

Ratio of area covered by the Kinect and total area of the workspace.

Ratio of workspace coverage and the total area of the work cell.

Angle between the Kinect axis and the side wall.

The Kinect index is omitted for brevity.
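The coverage ratios defined above can be estimated numerically by rasterizing the work cell and testing each cell against every sensor's field of view. The sketch below is an illustrative assumption, not the authors' implementation: the field-of-view and range constants are loosely based on the Kinect's commonly cited ~57 deg horizontal FOV and ~0.8-4 m usable depth range, and the function name is hypothetical.

```python
import math

def kinect_coverage_ratio(cell_w, cell_h, sensors, grid=200):
    """Grid-based estimate of the fraction of a rectangular work cell
    (cell_w x cell_h meters, horizontal projection) covered by at least
    one Kinect. Each sensor is given as (x, y, heading_rad).
    FOV and range constants are assumptions, not values from the paper."""
    FOV = math.radians(57.0)   # horizontal field of view (assumed)
    R_MIN, R_MAX = 0.8, 4.0    # usable depth range in meters (assumed)
    covered, total = 0, grid * grid
    for i in range(grid):
        for j in range(grid):
            # center of grid cell (i, j)
            px = (i + 0.5) * cell_w / grid
            py = (j + 0.5) * cell_h / grid
            for sx, sy, heading in sensors:
                d = math.hypot(px - sx, py - sy)
                if not (R_MIN <= d <= R_MAX):
                    continue  # outside the sensor's depth range
                bearing = math.atan2(py - sy, px - sx)
                # signed angular difference, wrapped to [-pi, pi]
                diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
                if abs(diff) <= FOV / 2:
                    covered += 1
                    break  # this cell is covered; no need to test more sensors
    return covered / total
```

Dividing the covered count by the full grid count gives the second ratio above (coverage over the total work cell area); restricting the denominator to workspace cells would give the first.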

1Corresponding author.

Contributed by the Computers and Information Division of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received July 27, 2013; final manuscript received October 20, 2013; published online January 22, 2014. Assoc. Editor: Joshua D. Summers.

J. Comput. Inf. Sci. Eng 14(1), 011006 (Jan 22, 2014) (9 pages) Paper No: JCISE-13-1139; doi: 10.1115/1.4025810 History: Received July 27, 2013; Revised October 20, 2013

We present a multiple-Kinect-based exteroceptive sensing framework to achieve safe human-robot collaboration during assembly tasks. Our approach is based on a real-time replication of the human and robot movements inside a physics-based simulation of the work cell. This enables evaluation of the human-robot separation in 3D Euclidean space, which can be used to generate safe motion goals for the robot. For this purpose, we develop an N-Kinect system to build an explicit model of the human, and a roll-out strategy in which we forward-simulate the robot's trajectory into the near future. We then use a precollision strategy that allows a human to operate in close proximity to the robot, pausing the robot's motion whenever an imminent collision between the human model and any part of the robot is detected. Whereas most previous range-based methods analyzed the physical separation based on depth data pertaining to 2D projections of the robot and human, our approach evaluates the separation in 3D space based on an explicit human model and a forward physical simulation of the robot. Real-time behavior (≈30 Hz) observed during experiments, in which a 5 DOF articulated robot and a human safely collaborate to perform an assembly task, validates our approach.
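The precollision strategy described in the abstract can be sketched as a minimum-distance test between bounding spheres on the human model and bounding spheres on the robot, applied at every pose of the forward-simulated (roll-out) trajectory. The following is a minimal sketch under stated assumptions: the 0.15 m pause threshold, the function names, and the sphere representation are illustrative choices, not values or APIs from the paper.

```python
import math

def min_separation(human_spheres, robot_spheres):
    """Minimum surface-to-surface distance between two sets of bounding
    spheres, each sphere given as (x, y, z, radius) in meters."""
    best = float("inf")
    for hx, hy, hz, hr in human_spheres:
        for rx, ry, rz, rr in robot_spheres:
            # center-to-center distance minus both radii
            d = math.dist((hx, hy, hz), (rx, ry, rz)) - hr - rr
            best = min(best, d)
    return best

def precollision_check(human_spheres, rollout, threshold=0.15):
    """'rollout' is a sequence of robot sphere sets obtained by
    forward-simulating the planned trajectory into the near future.
    Returns 'pause' if any roll-out pose brings the robot within
    'threshold' meters of the human model, else 'run'.
    The 0.15 m threshold is an illustrative assumption."""
    for robot_spheres in rollout:
        if min_separation(human_spheres, robot_spheres) < threshold:
            return "pause"
    return "run"
```

In this sketch the check runs once per sensing cycle (~30 Hz in the paper's experiments), so the robot resumes automatically once the human returns to a safe separation.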

Copyright © 2014 by ASME


Figures

Fig. 1

Overall system overview: (a) Work cell used to evaluate human-robot interaction. (b) 5 DOF robot used for the experiments. (c) Physical simulation used to evaluate the interference between the human and the robot in real time.

Fig. 2

The Microsoft Kinect directly outputs a 20-joint model of a human observed in a 3D scene

Fig. 3

Coverage (horizontal projection) obtained by using four Kinect sensors. The blue-colored regions are fully covered. Red- and white-colored regions represent the dead regions of the work cell.

Fig. 4

Filter tracking performance along the x axis

Fig. 5

Filter tracking performance along the y axis

Fig. 6

Filter tracking performance along the z axis

Fig. 7

Filter tracking performance in 3D

Fig. 8

Postures used to test the estimation accuracy of the overall system

Fig. 9

Discrepancy between projections of ground truth and estimated values on the XY plane

Fig. 10

Discrepancy between projections of ground truth and estimated values on the YZ plane

Fig. 11

Discrepancy between ground truth and estimated values for each joint, averaged over 15 locations

Fig. 12

Illustration of the precollision strategy: (a) Human is far away from the robot. As the distance between the spheres is significant, the robot performs its intended task. (b) An imminent collision is detected by the system; therefore, the robot is paused and a visual alarm is raised (bounding spheres change color). (c) and (d) Human returns to a safety zone; therefore, the robot resumes its motion.

Fig. 13

Robot and human collaborate to assemble the third part (radio box) onto the chassis
