Research Papers

Dynamic Rendering of Remote Indoor Environments Using Real-Time Point Cloud Data

Author and Article Information
Kevin Lesniak

Industrial and Manufacturing Engineering,
The Pennsylvania State University,
University Park, PA 16802
e-mail: kal5544@psu.edu

Conrad S. Tucker

Engineering Design, Industrial and
Manufacturing Engineering,
The Pennsylvania State University,
University Park, PA 16802
e-mail: ctucker4@psu.edu

Contributed by the Computers and Information Division of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received October 15, 2017; final manuscript received February 25, 2018; published online June 12, 2018. Assoc. Editor: Jitesh H. Panchal.

J. Comput. Inf. Sci. Eng. 18(3), 031006 (Jun 12, 2018) (11 pages); Paper No. JCISE-17-1230; doi: 10.1115/1.4039472. History: Received October 15, 2017; Revised February 25, 2018.

Modern color and depth (RGB-D) sensing systems can reconstruct convincing virtual representations of real-world environments. These reconstructions can serve as the foundation for virtual reality (VR) and augmented reality (AR) environments due to their high-quality visualizations. However, a key limitation of modern virtual reconstruction methods is the time required to incorporate new data and update the reconstruction. This delay prevents the reconstruction from accurately rendering dynamic objects or portions of the environment (such as an engineer inspecting a machinery or laboratory space). The authors propose a multisensor method to dynamically capture objects in an indoor environment. The method automatically aligns the sensors using modern image homography techniques, leverages graphics processing units (GPUs) to process the large number of independent RGB-D data points, and renders them in real time. Incorporating and aligning multiple sensors allows a larger area to be captured from multiple angles, providing a more complete virtual representation of the physical space. Processing on GPUs exploits the large number of available cores to minimize the delay between data capture and rendering. A case study using commodity RGB-D sensors, computing hardware, and standard transmission control protocol (TCP) internet connections demonstrates the viability of the proposed method.
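
To make the alignment step concrete, the sketch below estimates a planar homography between two sensors' color views from matched image features. The paper's pipeline builds on Harris corner detection and the Accord.NET framework (see the references); this sketch instead uses OpenCV's ORB features and RANSAC-based findHomography, so every function and parameter here is an illustrative substitute rather than the authors' implementation.

    import cv2
    import numpy as np

    def estimate_homography(img_a, img_b, min_matches=10):
        """Estimate the homography mapping view A onto view B.
        A stand-in for the paper's Harris-corner + Accord.NET step,
        with ORB features and RANSAC swapped in."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        # Brute-force Hamming matching suits ORB's binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        if len(matches) < min_matches:
            raise ValueError("too few feature matches to align sensors")

        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects outlier correspondences between the two views.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H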
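The "independent RGB-D data points" mentioned above are independent because each depth pixel back-projects to a 3D point using only the camera intrinsics, which is what makes the workload a good fit for GPU cores. The NumPy version below is a CPU stand-in for that per-point kernel; the intrinsic values are rough Kinect v2-style assumptions, not figures taken from the paper.

    import numpy as np

    # Approximate Kinect v2 depth-camera intrinsics (512x424 frames);
    # illustrative values only -- real code should query the sensor SDK.
    FX, FY = 365.0, 365.0   # focal lengths in pixels (assumed)
    CX, CY = 256.0, 212.0   # principal point (assumed)

    def depth_to_points(depth_mm):
        """Back-project a depth frame (uint16, millimeters) to an Nx3
        point cloud in meters. Every pixel is processed independently,
        mirroring the per-point parallelism a GPU kernel would exploit."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0          # mm -> m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                          # drop invalid pixels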
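On the network path, the case study streams data over standard TCP connections, and the references point to .NET's GZipStream for compression. As a rough Python analogue, the sketch below gzip-compresses each frame and sends it with a length prefix; the length-prefixed framing is an assumption for illustration, not a protocol described in the paper.

    import gzip
    import socket
    import struct

    def send_frame(sock: socket.socket, payload: bytes) -> None:
        """Compress one RGB-D frame and send it length-prefixed over TCP
        (an assumed framing; the paper cites .NET GZipStream for compression)."""
        blob = gzip.compress(payload)
        sock.sendall(struct.pack("!I", len(blob)) + blob)

    def recv_frame(sock: socket.socket) -> bytes:
        """Read one length-prefixed, gzip-compressed frame and inflate it."""
        (length,) = struct.unpack("!I", _recv_exact(sock, 4))
        return gzip.decompress(_recv_exact(sock, length))

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf += chunk
        return buf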

Copyright © 2018 by ASME

References

Cutting, J. E., 1997, "How the Eye Measures Reality and Virtual Reality," Behav. Res. Methods, Instrum., Comput., 29(1), pp. 27–36.
Bowman, D. A., and McMahan, R. P., 2007, "Virtual Reality: How Much Immersion is Enough?," Computer, 40(7), pp. 36–43.
Lee, K. M., 2004, "Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence," Presence: Teleoperators Virtual Environ., 13(4), pp. 494–505.
Turner, E., Cheng, P., and Zakhor, A., 2015, "Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments," IEEE J. Sel. Top. Signal Process., 9(3), pp. 409–421.
Hamzeh, O., and Elnagar, A., 2015, "A Kinect-Based Indoor Mobile Robot Localization," Tenth International Symposium on Mechatronics and Its Applications (ISMA), Sharjah, United Arab Emirates, Dec. 8–10, pp. 1–6.
Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A., Kohi, P., Shotton, J., Hodges, S., and Fitzgibbon, A., 2011, "KinectFusion: Real-Time Dense Surface Mapping and Tracking," Tenth IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Basel, Switzerland, Oct. 26–29, pp. 127–136.
Microsoft, 2011, "Kinect Fusion Explorer-WPF C# Sample," Microsoft Inc., Redmond, WA, accessed Feb. 16, 2017, https://msdn.microsoft.com/en-us/library/dn193975.aspx
Lesniak, K., Terpenny, J., Tucker, C. S., Anumba, C., and Bilén, S. G., 2016, "Immersive Distributed Design Through Real-Time Capture, Translation, and Rendering of Three-Dimensional Mesh Data," ASME J. Comput. Inf. Sci. Eng., 17(3), p. 031010.
Oculus VR, LLC, 2016, "Oculus Rift | Oculus," Oculus VR, LLC, Menlo Park, CA, accessed Feb. 16, 2017, https://www.oculus.com/rift/#oui-csl-rift-games=mages-tale
Ookla, 2016, "United States Speedtest Market Report," Ookla, Kalispell, MT, accessed Feb. 10, 2017, http://www.speedtest.net/reports/united-states/
Yang, R. S., Chan, Y. H., Gong, R., Nguyen, M., Strozzi, A. G., Delmas, P., Gimel'farb, G., and Ababou, R., 2013, "Multi-Kinect Scene Reconstruction: Calibration and Depth Inconsistencies," 28th International Conference on Image and Vision Computing New Zealand (IVCNZ), Wellington, New Zealand, Nov. 27–29, pp. 47–52.
Asteriadis, S., Chatzitofis, A., Zarpalas, D., Alexiadis, D. S., and Daras, P., 2013, "Estimating Human Motion From Multiple Kinect Sensors," Sixth International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications (MIRAGE), Berlin, June 6–7, Paper No. 3.
Harris, C., and Stephens, M., 1988, "A Combined Corner and Edge Detector," Alvey Vision Conference, Manchester, UK, Aug. 31–Sept. 2, Paper No. 50. http://citeseer.ist.psu.edu/viewdoc/download?doi=10.1.1.231.1604&rep=rep1&type=pdf
Dubrofsky, E., 2009, "Homography Estimation," Master's thesis, University of British Columbia, Vancouver, BC, Canada.
Souza, C. R., 2014, "Accord.NET Framework," São Carlos, Brazil, accessed Oct. 3, 2017, http://accord-framework.net/
Ni, D., Song, A., Xu, X., Li, H., Zhu, C., and Zeng, H., 2017, "3D-Point-Cloud Registration and Real-World Dynamic Modelling-Based Virtual Environment Building Method for Teleoperation," Robotica, 35(10), pp. 1958–1974.
Kim, S., and Park, J., 2017, "Robust Haptic Exploration of Remote Environments Represented by Streamed Point Cloud Data," IEEE World Haptics Conference (WHC), Munich, Germany, June 6–9, pp. 358–363.
Su, P.-C., Xu, W., Shen, J., and Cheung, S. S., 2017, "Real-Time Rendering of Physical Scene on Virtual Curved Mirror With RGB-D Camera Networks," IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China, July 10–14, pp. 79–84.
Garrett, T., Debernardis, S., Oliver, J., and Radkowski, R., 2016, "Poisson Mesh Reconstruction for Accurate Object Tracking With Low-Fidelity Point Clouds," ASME J. Comput. Inf. Sci. Eng., 17(1), p. 011003.
Epic Games, 2014, "What is Unreal Engine 4," Epic Games, Inc., Cary, NC, accessed Feb. 16, 2017, https://www.unrealengine.com/what-is-unreal-engine-4
Sharples, S., Cobb, S., Moody, A., and Wilson, J. R., 2008, "Virtual Reality Induced Symptoms and Effects (VRISE): Comparison of Head Mounted Display (HMD), Desktop and Projection Display Systems," Displays, 29(2), pp. 58–69.
Microsoft, 2010, "GZipStream Class (System.IO.Compression)," Microsoft Inc., Redmond, WA, accessed Feb. 16, 2017, https://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream(v=vs.110).aspx

Figures

Fig. 1: KinectFusionExplorer missing dynamic data
Fig. 3: Two misaligned datasets
Fig. 4: Image downsampling
Fig. 5: Two-sensor alignment
Fig. 7: Resource usage of depth and color data compared against 480p and 1080p images
