Research Papers

Massively Parallel Discrete Element Method Simulations on Graphics Processing Units

Author and Article Information
John Steuben

Computational Multiphysics Systems Laboratory,
U.S. Naval Research Laboratory,
Washington, DC 20375
e-mail: john.steuben.ctr@nrl.navy.mil

Graham Mustoe

Professor
College of Engineering and Computer Science,
Colorado School of Mines,
Golden, CO 80401
e-mail: gmustoe@mines.edu

Cameron Turner

Associate Professor
Department of Mechanical Engineering,
Clemson University,
Clemson, SC 29634
e-mail: cturne9@clemson.edu

¹Corresponding author.

Contributed by the Computers and Information Division of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received June 18, 2014; final manuscript received May 23, 2016; published online August 19, 2016. Editor: Bahram Ravani.

J. Comput. Inf. Sci. Eng. 16(3), 031001 (Aug 19, 2016) (8 pages). Paper No.: JCISE-14-1212; doi: 10.1115/1.4033724. History: Received June 18, 2014; Revised May 23, 2016

This paper outlines the development and implementation of large-scale discrete element method (DEM) simulations on graphics processing hardware. These simulations, as well as the topic of general-purpose graphics processing unit (GPGPU) computing, are introduced and discussed. We proceed to cover the general software design choices and architecture used to realize a GPGPU-enabled DEM simulation, driven primarily by the massively parallel nature of this computing technology. Further enhancements to this simulation, namely, a more advanced sliding friction model and a thermal conduction model, are then addressed. This discussion also highlights some of the finer points of GPGPU computing, particularly the issues of parallelization, synchronization, and approximation. Qualitative comparison studies between simple and advanced sliding friction models demonstrate the effectiveness of the friction model. A test problem and an application problem in the area of wind turbine blade icing demonstrate the capabilities of the thermal model. We conclude with remarks regarding the simulations developed, future work needed, and the general suitability of GPGPU architectures for DEM computations.
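The massively parallel mapping the abstract describes is commonly realized as one OpenCL work-item per particle. The sketch below is illustrative only, not the authors' code: the kernel name, the packing of the particle radius into pos.w, and the explicit-Euler integration scheme are all assumptions.

    __kernel void integrate(__global float4 *pos,          /* xyz position, w = radius */
                            __global float4 *vel,          /* xyz velocity, w unused   */
                            __global const float4 *force,  /* accumulated contact force */
                            const float inv_mass,
                            const float dt,
                            const uint n)
    {
        uint i = get_global_id(0);       /* one work-item per particle */
        if (i >= n) return;

        float4 v = vel[i] + (dt * inv_mass) * force[i];
        float4 p = pos[i];
        p.xyz += dt * v.xyz;             /* w (the radius) is left untouched */

        vel[i] = v;                      /* each work-item writes only its own state */
        pos[i] = p;
    }

Because each work-item writes only its own particle's state, this phase needs no inter-thread synchronization; it is the force-accumulation phase that raises the parallelization and synchronization issues the paper discusses.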

Copyright © 2016 by ASME


Figures

Fig. 1

Elastic interaction of particles in a DEM simulation. Arrows indicate particle velocity, and the shaded areas where particles overlap indicate a repulsive force between particles due to elastic deformation.
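The overlap-based repulsion shown in Fig. 1 is the core of any soft-sphere DEM. As a hedged illustration only (the kernel name, the linear stiffness K, and the brute-force O(N²) pair search are assumptions; a production code would use a proper neighbor search), the repulsive contact force might be computed as:

    __kernel void contact_forces(__global const float4 *pos,  /* xyz, w = radius */
                                 __global float4 *force,
                                 const float K,                /* contact stiffness */
                                 const uint n)
    {
        uint i = get_global_id(0);
        if (i >= n) return;

        float4 pi = pos[i];
        float3 f = (float3)(0.0f);

        for (uint j = 0; j < n; ++j) {
            if (j == i) continue;
            float3 d = pi.xyz - pos[j].xyz;
            float dist = length(d);
            float overlap = pi.w + pos[j].w - dist;  /* the shaded region in Fig. 1 */
            if (overlap > 0.0f && dist > 0.0f)
                f += (K * overlap / dist) * d;       /* linear repulsion along the normal */
        }
        force[i] = (float4)(f, 0.0f);
    }

Evaluating each contact redundantly from both sides (particle i sums over all j) trades extra arithmetic for the elimination of write conflicts, a typical GPGPU synchronization trade-off.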

Fig. 2

Flow diagram of the OpenCL DEM simulation. Note the three distinct phases.
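The three phases of Fig. 2 are not enumerated in this excerpt; a common DEM decomposition, assumed here purely for illustration, is contact detection, force computation, and time integration. A host-side time step might then look like the following sketch (the function and kernel names are hypothetical, and kernel arguments are assumed to be bound elsewhere):

    #include <CL/cl.h>

    /* Dispatch one DEM time step as three kernel phases. */
    void dem_step(cl_command_queue queue,
                  cl_kernel detect, cl_kernel forces, cl_kernel integrate,
                  size_t n_particles)
    {
        clEnqueueNDRangeKernel(queue, detect,    1, NULL, &n_particles, NULL, 0, NULL, NULL);
        clEnqueueNDRangeKernel(queue, forces,    1, NULL, &n_particles, NULL, 0, NULL, NULL);
        clEnqueueNDRangeKernel(queue, integrate, 1, NULL, &n_particles, NULL, 0, NULL, NULL);
        /* An in-order queue serializes the phases on the device, so the
         * host need not stall between kernels within a step. */
    }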

Fig. 3

Local coordinate system (left) and friction force Fs versus slip distance δs (right)
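The force-versus-slip curve of Fig. 3 suggests a spring-slider (stick-slip Coulomb) law: tangential force grows with accumulated slip until it saturates at the Coulomb limit. A minimal sketch of such a law, assuming a bilinear form with tangential stiffness k_s and friction coefficient μ (the paper's advanced model may differ), is:

    #include <math.h>

    /* Tangential friction force: elastic up to the Coulomb limit.
     * delta_s: accumulated slip distance at the contact
     * k_s:     tangential (shear) stiffness
     * mu:      friction coefficient; f_n: normal force magnitude */
    float sliding_friction(float delta_s, float k_s, float mu, float f_n)
    {
        float f_stick = k_s * fabsf(delta_s);   /* spring (stick) branch  */
        float f_slip  = mu * f_n;               /* Coulomb (slip) plateau */
        float f_mag   = fminf(f_stick, f_slip);
        return copysignf(f_mag, -delta_s);      /* force opposes the slip */
    }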

Fig. 4

Results of the first trial run. The frames show the simulation at t = 10, 30, 90, 120, 160, and 180 s since simulation start.

Fig. 5

Results of the proof-of-concept test run. Note the temperature scale at right.
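For the thermal model demonstrated in Fig. 5, a common approach (assumed here; the paper's formulation may differ) is contact conduction: heat flows between touching particles in proportion to their temperature difference and a conductance tied to the contact size. A sketch in the same one-work-item-per-particle style, with the conductance-per-overlap parameter h and the all-pairs search again being illustrative assumptions:

    __kernel void thermal_conduction(__global const float4 *pos,   /* xyz, w = radius */
                                     __global const float  *temp,
                                     __global float *dTdt,         /* temperature rate */
                                     const float h,                /* conductance per unit overlap */
                                     const float heat_cap,         /* particle heat capacity (m*c) */
                                     const uint n)
    {
        uint i = get_global_id(0);
        if (i >= n) return;

        float Ti = temp[i];
        float q = 0.0f;                          /* net heat rate into particle i */

        for (uint j = 0; j < n; ++j) {
            if (j == i) continue;
            float dist = length(pos[i].xyz - pos[j].xyz);
            float overlap = pos[i].w + pos[j].w - dist;
            if (overlap > 0.0f)
                q += h * overlap * (temp[j] - Ti);  /* conductance ~ contact size (assumed) */
        }
        dTdt[i] = q / heat_cap;   /* integrated explicitly alongside the motion */
    }

Writing only dTdt[i] keeps this phase free of write conflicts, at the cost of visiting each contact twice, the same trade-off as in the force kernel.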

Fig. 6

Turbine blade geometry

Fig. 7

Results of GPGPU-enabled thermal DEM. Particle flow is from left to right. The formation of a wake (left) is seen on the leading edge of the turbine blade (wire frame). The thermal gradient in the boundary layer is seen in the cross section view (right).
