Research Papers

Design of Hybrid Cells to Facilitate Safe and Efficient Human–Robot Collaboration During Assembly Operations

Author and Article Information
Krishnanand N. Kaipa

Department of Mechanical
and Aerospace Engineering,
Old Dominion University,
Norfolk, VA 23529
e-mail: kkaipa@odu.edu

Carlos W. Morato

ABB Corporate Research Center, ABB Inc.,
Windsor, CT 06065
e-mail: carlos.morato@us.abb.com

Satyandra K. Gupta

Center for Advanced Manufacturing,
University of Southern California,
Los Angeles, CA 90089-1453
e-mail: guptask@usc.edu

¹Corresponding author.

Contributed by the Computer-Aided Product Development Committee of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received October 26, 2017; final manuscript received January 10, 2018; published online June 12, 2018. Special Editor: Jitesh H. Panchal.

J. Comput. Inf. Sci. Eng. 18(3), 031004 (Jun 12, 2018) (11 pages) Paper No: JCISE-17-1246; doi: 10.1115/1.4039061 History: Received October 26, 2017; Revised January 10, 2018

This paper presents a framework for building hybrid cells that support safe and efficient human–robot collaboration during assembly operations. Our approach allows asynchronous collaboration between the human and the robot: the human retrieves parts from a bin and places them in the robot's workspace, while the robot picks up the placed parts and assembles them into the product. We present the design details of the overall framework, which comprises three modules: plan generation, system state monitoring, and contingency handling. We describe system state monitoring and present a characterization of the part-tracking algorithm. We report results from human–robot collaboration experiments using a KUKA robot and a three-dimensional (3D)-printed mockup of a simplified jet-engine assembly to illustrate our approach.
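To make the division of labor concrete, here is a minimal sketch of how the three modules named above (plan generation, system state monitoring, and contingency handling) might be wired together in a monitoring loop. All class, method, and pose values are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a hybrid-cell control loop; not the paper's code.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AssemblyStep:
    part: str                         # part the human is expected to place next
    pose: Tuple[float, float, float]  # target pose in the robot workspace

@dataclass
class HybridCell:
    plan: List[AssemblyStep]                       # output of plan generation
    completed: List[str] = field(default_factory=list)

    def monitor(self, observed_part: str, step: AssemblyStep) -> bool:
        """System state monitoring: compare the observed part to the plan."""
        return observed_part == step.part

    def handle_contingency(self, observed_part: str) -> None:
        """Contingency handling: e.g., prompt the human to return the part."""
        print(f"Unexpected part {observed_part!r}: asking human to return it")

    def run(self, placements: List[str]) -> None:
        """Asynchronous loop: the human places parts, the robot assembles them."""
        for step, observed in zip(self.plan, placements):
            if self.monitor(observed, step):
                print(f"Robot picks {observed!r}, assembles at {step.pose}")
                self.completed.append(observed)
            else:
                self.handle_contingency(observed)
                return

cell = HybridCell(plan=[AssemblyStep("rear_bearing", (0.4, 0.1, 0.0)),
                        AssemblyStep("first_compressor", (0.4, 0.2, 0.0))])
cell.run(["rear_bearing", "first_compressor"])
```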

Copyright © 2018 by ASME


Figures

Fig. 1

Hybrid cell in which a human and a robot collaborate to assemble a product

Fig. 2

(a) Assembly computer-aided design (CAD) parts from a simplified jet engine, (b) a simple jet engine assembly, and (c) feasible assembly sequence generated by the algorithm
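As a hedged illustration of what a "feasible assembly sequence generated by the algorithm" can look like computationally, the snippet below topologically sorts a set of assembly precedence constraints. The constraints are invented for illustration (part names follow Fig. 6), and this simple sort is not necessarily the paper's method.

```python
# Illustrative precedence constraints (invented): each part maps to the set
# of parts that must be assembled before it. Requires Python 3.9+ (graphlib).
from graphlib import TopologicalSorter

precedence = {
    "first_compressor": {"rear_bearing"},
    "second_compressor": {"first_compressor"},
    "third_compressor": {"second_compressor"},
    "exhaust_turbine": {"third_compressor"},
}
sequence = list(TopologicalSorter(precedence).static_order())
print(sequence)
# ['rear_bearing', 'first_compressor', 'second_compressor',
#  'third_compressor', 'exhaust_turbine']
```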

Fig. 3

Generation of instructions for chassis assembly (1–6)
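For a sense of what templated instruction generation can produce, here is a toy sketch; the step list and wording are invented, not the figure's actual instructions.

```python
# Toy instruction generator: renders (verb, part) steps as numbered prompts.
steps = [("pick", "rear bearing"), ("place", "rear bearing"),
         ("pick", "first compressor"), ("place", "first compressor")]
for i, (verb, part) in enumerate(steps, 1):
    print(f"Step {i}: {verb} the {part} at the marked location.")
```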

Fig. 4

Three-dimensional part tracking block diagram
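The block diagram itself is not reproduced here, but a generic 3D part-tracking loop of this kind acquires a depth frame, segments candidate clusters, and matches each cluster against stored part models. The sketch below shows that skeleton with stand-in sensor and segmentation functions and a toy nearest-point matching score; none of it is the paper's actual pipeline.

```python
# Skeleton of a 3D part-tracking loop; all function bodies are stand-ins.
import numpy as np

def acquire_point_cloud() -> np.ndarray:
    """Stand-in for a depth-sensor read: N x 3 points in the workspace."""
    return np.random.rand(1000, 3)

def segment_clusters(cloud: np.ndarray) -> list:
    """Stand-in for segmentation of the cloud into candidate part clusters."""
    return [cloud[:500], cloud[500:]]

def match_score(cluster: np.ndarray, model: np.ndarray) -> float:
    """Toy similarity score: mean nearest-point distance (lower is better)."""
    dists = np.linalg.norm(cluster[:, None, :] - model[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# Stand-in model point clouds; a real system would sample the part CAD models.
models = {"rear_bearing": np.random.rand(200, 3),
          "exhaust_turbine": np.random.rand(200, 3)}

for cluster in segment_clusters(acquire_point_cloud()):
    best = min(models, key=lambda name: match_score(cluster, models[name]))
    print("cluster identified as:", best)
```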

Fig. 5

The discrete-state monitoring system has two control points: (a) initial location: parts lie outside the robot workspace in a random configuration, and the human picks them one by one; (b) intermediate location: the human places the parts in the robot workspace in a specific configuration; (c) the robot picks up the part from the assembly table and performs the task.
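One compact reading of this discrete monitoring is a small state machine keyed to the control points in the caption. The sketch below is a minimal illustration, assuming each part advances from initial to intermediate to assembled; the state names are invented.

```python
# Minimal per-part state machine over the two monitored control points.
from enum import Enum, auto

class PartState(Enum):
    AT_INITIAL = auto()        # in the bin, outside the robot workspace
    AT_INTERMEDIATE = auto()   # placed by the human in the robot workspace
    ASSEMBLED = auto()         # picked up and assembled by the robot

TRANSITIONS = {
    PartState.AT_INITIAL: PartState.AT_INTERMEDIATE,   # human places the part
    PartState.AT_INTERMEDIATE: PartState.ASSEMBLED,    # robot assembles it
}

state = PartState.AT_INITIAL
while state in TRANSITIONS:
    state = TRANSITIONS[state]
    print("part state:", state.name)
```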

Fig. 6

First compressor identified in a subset of similar parts: cluster 1 (rear bearing), cluster 2 (first compressor), cluster 3 (second compressor), cluster 4 (third compressor), and cluster 5 (exhaust turbine)
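Separating geometrically similar parts like these is typically done by comparing shape descriptors. As a hedged sketch, the snippet below classifies a scanned cluster with a crude centroid-distance histogram; the descriptor and the random stand-in point clouds are illustrative only, and the part names follow the caption.

```python
# Toy shape-signature matching over the five similar parts from Fig. 6.
import numpy as np

def descriptor(points: np.ndarray) -> np.ndarray:
    """Crude signature: normalized histogram of distances to the centroid."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=16, range=(0.0, 1.0), density=True)
    return hist

part_names = ["rear_bearing", "first_compressor", "second_compressor",
              "third_compressor", "exhaust_turbine"]
library = {name: descriptor(np.random.rand(300, 3)) for name in part_names}

scan = np.random.rand(300, 3)  # stand-in for a segmented sensor cluster
best = min(library, key=lambda n: np.linalg.norm(descriptor(scan) - library[n]))
print("best match:", best)
```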

Fig. 7

Performance characterization: the region near the intersection of the processing-time and mean-square error (MSE) curves, and below the threshold, represents the “sweet spot”
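Read as a tuning problem, the “sweet spot” is the parameter region where processing time and tracking error are simultaneously acceptable. The sketch below sweeps a hypothetical tracker resolution against synthetic time and MSE curves to show the selection logic; none of the numbers are the paper's measurements.

```python
# Synthetic sweet-spot search: keep settings where both metrics pass.
import numpy as np

resolution = np.linspace(0.1, 1.0, 10)  # fraction of sensor points kept
proc_time = 2.0 * resolution            # processing time grows with detail
mse = 0.5 / resolution                  # tracking error shrinks with detail

TIME_MAX, MSE_MAX = 1.5, 1.2            # illustrative thresholds
feasible = (proc_time < TIME_MAX) & (mse < MSE_MAX)
print("feasible resolutions:", resolution[feasible])
```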

Fig. 8

Assembly operations: (a) the human picks up the part, (b) the system recognizes the part to allow synchronization, (c) the human moves the part toward the intermediate location, and (d) the human places the part at the intermediate location

Fig. 9

(a) Human picks a part (compressor); appropriate text annotations are generated as feedback to the human. (b) The selected part differs from the assembly sequence; after a real-time evaluation, the system does not accept the modification to the assembly plan. (c) Human returns the part to location 1. (d) Human picks a part (exhaust turbine); after real-time evaluation, the part is accepted. (e) Human places the part in the robot's workspace. (f) Robot motion planning is executed for the exhaust turbine. If the assembly plan is modified (replanning), the robot uses the altered motion plan to pick the part and place it at its target position in the assembly.
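The accept/reject behavior in (b) and (d) amounts to real-time validation of a deviation from the planned sequence. A minimal sketch, assuming precedence constraints are the acceptance criterion (the paper's actual criterion may differ):

```python
# Accept an out-of-sequence pick only if its prerequisites are assembled.
def can_accept(part: str, precedence: dict, done: set) -> bool:
    return precedence.get(part, set()) <= done

precedence = {"first_compressor": {"rear_bearing"}, "exhaust_turbine": set()}
done = set()                      # nothing assembled yet

picked = "exhaust_turbine"        # human deviates from the nominal plan
if can_accept(picked, precedence, done):
    print(f"accepted: replanning around {picked!r}")
else:
    print(f"rejected: {picked!r} must wait; return it to location 1")
```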
