Review Article

J. Comput. Inf. Sci. Eng. 2017;18(1):010801-010801-7. doi:10.1115/1.4038291.

An anthropomorphic, under-actuated prosthetic hand has been designed and developed for upper extremity amputees. This paper proposes a dexterity-focused approach to the design of an anthropomorphic electromechanical hand for transradial amputees. Dexterity is increased by improving the thumb's position, orientation, and workspace. The fingers of the hand are also capable of adduction and abduction. The intent of this research project is to aid the rehabilitation of upper extremity amputees by increasing the number of tasks the hand can execute. Function and control of the hand are based on micro servo actuation and information acquired from the brain. Electroencephalography (EEG) is used to infer the mental state of the user, which triggers the prosthetic hand. This paper focuses on the mechanical arrangement of the hand and investigates the effect of increasing the degrees of freedom (DOFs) of the thumb and fingers.
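
As a hypothetical sketch of the EEG-trigger idea only (the paper's actual signal pipeline, thresholds, and servo commands are not reproduced here), a band-power estimate of the user's mental state crossing a threshold might fire a grasp command:

```python
# Hypothetical sketch: a crude band-power estimate of an EEG window
# crosses a threshold and triggers a servo command. Signal values,
# threshold, and command names are all illustrative assumptions.
def band_power(samples):
    """Mean squared amplitude as a simple power estimate."""
    return sum(s * s for s in samples) / len(samples)

def eeg_trigger(window, threshold=0.5):
    """Map an EEG window to a (hypothetical) hand command."""
    return "close_grip" if band_power(window) > threshold else "hold"

relaxed = [0.1, -0.2, 0.15, -0.1]   # low-amplitude synthetic window
focused = [0.9, -1.1, 1.0, -0.8]    # high-amplitude synthetic window
print(eeg_trigger(relaxed), eeg_trigger(focused))
```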

Commentary by Dr. Valentin Fuster

Research Papers

J. Comput. Inf. Sci. Eng. 2017;18(1):011001-011001-12. doi:10.1115/1.4037934.

The design of complex engineering systems requires that the problem is decomposed into subproblems of manageable size. From the perspective of decision-based design (DBD), this typically results in a set of hierarchical decisions. It is critically important for computational frameworks for engineering system design to be able to capture and document this hierarchical decision-making knowledge for reuse. An ontology is a formal knowledge modeling scheme that provides a means to structure engineering knowledge in a retrievable, computer-interpretable, and reusable manner. In our earlier work, we have created ontologies to represent individual design decisions (selection and compromise). Here, we extend the selection and compromise decision ontologies to an ontology for hierarchical decisions. This can be used to represent workflows in which multiple decisions are coupled together. The core of the proposed ontology includes the coupled decision support problem (DSP) construct and two key classes, namely, Process, which represents the basic hierarchy building blocks wherein the DSPs are embedded, and Interface, which represents the DSP information flows that link different Processes into a hierarchy. The efficacy of the ontology is demonstrated using a portal frame design example. Advantages of this ontology are that it is decomposable and flexible enough to accommodate the dynamic evolution of a process along the design timeline.
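
As an illustrative sketch only (the paper's ontology is a formal knowledge model, not Python code, and the class attributes below are assumptions), the two key classes might link embedded DSPs into a hierarchy like this:

```python
# Illustrative rendering of the ontology's building blocks: a Process
# embeds a DSP, and an Interface links two Processes into a hierarchy.
# All names and attributes here are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class DSP:
    """A decision support problem (selection or compromise)."""
    name: str
    kind: str  # "selection" or "compromise"

@dataclass
class Process:
    """Basic hierarchy building block; embeds a DSP."""
    name: str
    dsp: DSP
    children: list = field(default_factory=list)

@dataclass
class Interface:
    """Information flow linking one Process to another."""
    source: "Process"
    target: "Process"
    variables: tuple = ()

# Couple a top-level compromise decision to a subordinate selection.
top = Process("frame_sizing", DSP("size_frame", "compromise"))
sub = Process("material_choice", DSP("pick_material", "selection"))
top.children.append(sub)
link = Interface(top, sub, variables=("load", "deflection_limit"))
print(link.variables)  # ('load', 'deflection_limit')
```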

J. Comput. Inf. Sci. Eng. 2017;18(1):011002-011002-10. doi:10.1115/1.4037434.

An important part of the engineering design process is prototyping, where designers build and test their designs. This process is typically iterative, time consuming, and manual in nature. For a given task, there are multiple objects that can be used, each requiring a different amount of time to accomplish the task. Current methods for reducing time spent during the prototyping process have focused primarily on optimizing designer-to-designer interactions, as opposed to designer-to-tool interactions. Advancements in commercially available sensing systems (e.g., the Kinect) and machine learning algorithms have opened the pathway toward real-time observation of designers' behavior in engineering workspaces during prototype construction. Toward this end, this work hypothesizes that an object O being used for task i is distinguishable from object O being used for task j, where i is the correct task and j is the incorrect task. The contributions of this work are: (i) the ability to recognize these objects in a free-roaming engineering workshop environment and (ii) the ability to distinguish between the correct and incorrect use of objects during a prototyping task. By distinguishing between correct and incorrect uses, incorrect behavior (which often results in wasted time and materials) can be detected and quickly corrected. The method presented in this work learns as designers use objects, and infers the proper way to use them during prototyping. To demonstrate the effectiveness of the proposed method, a case study is presented in which participants in an engineering design workshop are asked to perform correct and incorrect tasks with a tool. The participants' movements are analyzed by an unsupervised clustering algorithm to determine whether there is a statistical difference between tasks performed correctly and incorrectly. Clusters that are plurality-incorrect are found to be significantly distinct for each node considered by the method, each with p ≪ 0.001.
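
A minimal sketch of the underlying idea (not the paper's algorithm or data; the features, k-means variant, and toy values are assumptions) is to cluster motion features without labels and then check each cluster's plurality label:

```python
# Cluster motion features without labels, then tag each cluster as
# plurality-correct or plurality-incorrect. Data is synthetic.
import random
from collections import Counter

def kmeans(points, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm on tuples of floats."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return clusters

# Toy features: (wrist speed, grip angle) for correct vs incorrect use.
correct = [(1.0 + 0.1 * i, 0.2) for i in range(5)]
incorrect = [(3.0 + 0.1 * i, 1.5) for i in range(5)]
labels = {p: "correct" for p in correct}
labels.update({p: "incorrect" for p in incorrect})

for cl in kmeans(correct + incorrect):
    plurality = Counter(labels[p] for p in cl).most_common(1)[0][0]
    print(len(cl), plurality)
```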

J. Comput. Inf. Sci. Eng. 2017;18(1):011003-011003-13. doi:10.1115/1.4038158.

In complex engineering systems, complexity may arise by design, or as a by-product of the system's operation. In either case, the cause of complexity is the same: the unpredictable manner in which interactions among components modify system behavior. Traditionally, two different approaches are used to handle such complexity: (i) a centralized design approach, where the impacts of all potential system states and behaviors resulting from design decisions must be accurately modeled, and (ii) an approach based on externally legislating design decisions, which avoids such difficulties, but at the cost of expensive external mechanisms to determine trade-offs among competing design decisions. Our approach is a hybrid of the two, providing a method in which decisions can be reconciled without the need for either detailed interaction models or external mechanisms. A key insight of this approach is that complex system design, undertaken with respect to a variety of design objectives, is fundamentally similar to the multi-agent coordination problem, where component decisions and their interactions lead to global behavior. The results of this paper demonstrate that a team of autonomous agents using a cooperative coevolutionary algorithm (CCEA) can effectively design a complex engineered system. This paper uses a system model of a Formula SAE racing vehicle to illustrate and simulate the methods and potential results. By designing complex systems with a multi-agent coordination approach, a design methodology can be developed to reduce design uncertainty and provide mechanisms through which the system-level impact of decisions can be estimated without explicitly modeling such interactions.
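
A minimal CCEA sketch under stated assumptions (a toy coupled objective stands in for the Formula SAE system model; population sizes, mutation scale, and the one-variable-per-agent split are all illustrative): each agent evolves one design variable, and a candidate is scored by teaming it with the other agents' current best values.

```python
# Cooperative coevolution sketch: one population per design variable,
# fitness evaluated on the assembled team. Toy objective, not the
# paper's vehicle model.
import random

def system_performance(x, y):
    # Coupled quadratic (higher is better); optimum at x=2.25, y=2.75.
    return -(x - 2.0) ** 2 - (y - 3.0) ** 2 - 0.5 * (x - y) ** 2

rng = random.Random(1)
pops = {"x": [rng.uniform(-5, 5) for _ in range(20)],
        "y": [rng.uniform(-5, 5) for _ in range(20)]}
best = {k: p[0] for k, p in pops.items()}

for gen in range(200):
    for agent in pops:
        def team_score(v):
            ctx = dict(best)      # freeze collaborators at their best
            ctx[agent] = v
            return system_performance(ctx["x"], ctx["y"])
        # Elitist select-and-mutate step on this agent's population.
        pops[agent].sort(key=team_score, reverse=True)
        parents = pops[agent][:10]
        pops[agent] = parents + [p + rng.gauss(0, 0.2) for p in parents]
        best[agent] = pops[agent][0]

print(round(best["x"], 2), round(best["y"], 2))
```

Because each agent only sees the system-level score of the assembled team, no explicit model of the inter-variable interactions is needed, which is the point the abstract makes.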

J. Comput. Inf. Sci. Eng. 2017;18(1):011004-011004-10. doi:10.1115/1.4037435.

Usage context is considered a critical driving factor for customers' product choices. In addition, physical use of a product (i.e., user-product interaction) dictates a number of customer perceptions (e.g., level of comfort). In the emerging internet of things (IoT), this work hypothesizes that it is possible to understand product usage and level of comfort while the product is in use by capturing the user-product interaction data. Mining this data to understand both the usage context and the comfort of the user adds new capabilities to product design. There has been tremendous progress in the field of data analytics, but its application in product design is still nascent. In this work, the application of feature-learning methods to the identification of product usage context and level of comfort is demonstrated, where usage context is limited to the activity of the user. A novel generic architecture built on convolutional neural network (CNN) foundations is developed and applied to walking activity classification using smartphone accelerometer data. Results are compared with feature-based machine learning algorithms (neural network and support vector machines (SVM)) and demonstrate the benefits of using feature-learning methods over feature-based machine-learning algorithms. To demonstrate the generic nature of the architecture, an application toward comfort level prediction is presented using force sensor data from a sensor-integrated shoe.
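
To make the feature-learning idea concrete, here is a pure-Python sketch of the 1-D convolution, ReLU, and pooling building block such a CNN applies to raw accelerometer samples (the paper's architecture and learned filters are not reproduced; the kernel and signal below are hand-picked illustrations):

```python
# 1-D conv -> ReLU -> max-pool on a synthetic accelerometer trace.
# In a trained CNN the kernel weights are learned, not hand-picked.
def conv1d(signal, kernel):
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, width=2):
    return [max(xs[i:i + width]) for i in range(0, len(xs) - width + 1, width)]

# An edge-detecting kernel stands in for a learned filter; the input
# is a synthetic step in the z-axis acceleration.
accel_z = [0.0, 0.0, 0.1, 0.9, 1.0, 1.0, 0.2, 0.0]
feature_map = max_pool(relu(conv1d(accel_z, [-1.0, 0.0, 1.0])))
print(feature_map)
```

Stacking several such layers and feeding the final feature map to a classifier is what lets the network learn its features from raw data, instead of relying on hand-engineered ones as the baseline SVM and neural network do.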

J. Comput. Inf. Sci. Eng. 2017;18(1):011005-011005-12. doi:10.1115/1.4038292.

In this paper, we present multiple methods to detect fasteners (bolts, screws, and nuts) from tessellated mechanical assembly models. There is a need to detect these geometries in tessellated formats because of features that are lost during the conversions from other geometry representations to tessellation. Two geometry-based algorithms, projected thread detector (PTD) and helix detector (HD), and four machine learning classifiers, voted perceptron (VP), Naïve Bayes (NB), linear discriminant analysis, and Gaussian process (GP), are implemented to detect fasteners. These six methods are compared and contrasted to arrive at an understanding of how to best perform this detection in practice on large assemblies. Furthermore, the degree of certainty of the automatic detection is also developed and examined so that a user may be queried when the automatic detection leads to a low certainty in the classification. This certainty measure is developed with three probabilistic classifier approaches and one fuzzy logic-based method. Finally, once the fasteners are detected, the authors show how the thread angle, the number of threads, the length, and major and root diameters can be determined. All of the mentioned methods are implemented and compared in this paper. A proposed combination of methods leads to an accurate and robust approach of performing fastener detection.
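
As a hedged sketch of the final step only (interpreting "thread angle" as the helix/lead angle; the dimensions below are hypothetical, roughly M10x2 proportions, not the paper's test parts), the thread parameters can be recovered from the detected geometry:

```python
# Recover thread parameters from a detected fastener's geometry.
# Assumption: "thread angle" here means the helix (lead) angle at the
# major diameter; all dimensions are illustrative.
import math

def thread_parameters(length_mm, pitch_mm, major_d_mm, minor_d_mm):
    return {
        "num_threads": length_mm / pitch_mm,
        "thread_angle_deg": math.degrees(
            math.atan(pitch_mm / (math.pi * major_d_mm))),
        "root_diameter_mm": minor_d_mm,
    }

p = thread_parameters(length_mm=16.0, pitch_mm=2.0,
                      major_d_mm=10.0, minor_d_mm=8.16)
print(p["num_threads"])  # 8.0
```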

J. Comput. Inf. Sci. Eng. 2017;18(1):011006-011006-12. doi:10.1115/1.4038144.

This paper is arranged in three main sections. The first section presents a hierarchical method, based on clustering and a fuzzy membership system, in which tessellated three-dimensional (3D) models are classified into their constituent primitives: cylinder, cone, sphere, and flat. In the second section, automated assembly planning (AAP) is considered as the main application of our novel hierarchical primitive classification approach. The classified primitives obtained from the first section are used to define the removal directions between mating parts in an assembly model. Finally, a fuzzification method is used to express the uncertainty of the detected connections between every pair of parts. The resulting uncertainties are used in a user interaction process to approve, deny, or modify the connections with higher uncertainty.
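
As an illustrative sketch of the fuzzification step (the membership shape, cutoffs, and review threshold below are assumptions, not the paper's calibrated values), a triangular fuzzy set can map a connection's fit score to a confidence, with low-confidence connections queued for user review:

```python
# Triangular fuzzy membership used to express connection uncertainty;
# all numeric parameters are hypothetical.
def triangular_membership(x, a, b, c):
    """Degree to which x belongs to the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Confidence that two parts are truly connected, given a fit score.
fit_score = 0.7
confidence = triangular_membership(fit_score, a=0.4, b=0.8, c=1.2)
# Connections below a review threshold go to the user for approval,
# denial, or modification.
needs_review = confidence < 0.9
print(confidence, needs_review)
```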

J. Comput. Inf. Sci. Eng. 2018;18(1):011007-011007-13. doi:10.1115/1.4038315.

In this paper, an effective strategy is proposed to realize the smooth visualization of large-scale finite element models on a desktop computer. Based on multicore parallel and graphics processing unit (GPU) computing techniques, the large-scale data of a finite element model and the corresponding graphics data can be handled and rendered effectively. The proposed strategies mainly consist of four parts. First, a parallel surface extraction technology based on the dual connections of elements and nodes is developed to reduce the graphics data. Second, the OpenGL vertex buffer object (VBO) technology is used to improve the rendering efficiency after surface extraction. Third, the element-hiding and cut-surface functions are implemented to facilitate the observation of the interior of the meshes. Finally, the stream/filter architecture, which has the advantages of efficient computation and communication, is introduced to meet the needs of large-scale data processing and various visualization methods. These strategies are developed on the general visualization system SiPESC.Post. Using these strategies, SiPESC.Post implements high-performance display and real-time operation for large-scale finite element models, especially for models containing millions or tens of millions of elements. To demonstrate the superiority and feasibility of the presented strategies, large-scale numerical examples are presented, and the strategies are compared with several commercial finite element software systems and open-source visual postprocessing packages in terms of visualization efficiency.
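
The surface-extraction rule can be sketched simply (a serial, pure-Python illustration of the principle the paper parallelizes; the mesh below is a two-tetrahedron toy, not one of the paper's models): a face shared by two solid elements is interior, while a face owned by exactly one element lies on the surface and is the only geometry that needs rendering.

```python
# Boundary-face extraction for a tetrahedral mesh: count how many
# elements own each (sorted) node triple; count == 1 means surface.
from collections import Counter
from itertools import combinations

def surface_faces(tet_elements):
    """tet_elements: list of 4-node tetrahedra as node-index tuples."""
    counts = Counter()
    for tet in tet_elements:
        for face in combinations(sorted(tet), 3):  # 4 faces per tet
            counts[face] += 1
    return [f for f, n in counts.items() if n == 1]

# Two tets sharing face (1, 2, 3): only the shared face is interior.
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
faces = surface_faces(tets)
print(len(faces))  # 6 surface faces out of 8 total
```

Because only surface faces are uploaded to the GPU (via VBOs, in the paper's pipeline), the rendered data set shrinks dramatically for large solid meshes.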

J. Comput. Inf. Sci. Eng. 2018;18(1):011008-011008-8. doi:10.1115/1.4038968.

Matching shapes by their critical feature points is useful in mechanical processes such as precision measurement of manufactured parts and automatic assembly of parts. In this paper, we present a practical algorithm for measuring the similarity of two point sets A and B: given an allowable tolerance ε, our goal is to determine the feasibility of placing A with respect to B such that the maximum of the minimum distances from each point of A to its corresponding matched point in B is no larger than ε. For sparse and small point sets, an improved algorithm is achieved based on a sparse grid, which is used as an auxiliary structure for building the correspondence between A and B. For large point sets, allowing a trade-off between efficiency and accuracy, we approximate the problem as computing the directed Hausdorff distance from A to B, and provide a two-phase nested Monte Carlo method for solving it. Experimental results are presented to validate the proposed algorithms.
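
A brute-force sketch of the quantity being approximated (the paper accelerates this with sparse grids and a nested Monte Carlo method; the point sets below are illustrative): the directed Hausdorff distance from A to B is the largest of the per-point nearest-neighbor distances, and a match under tolerance ε succeeds when it is no larger than ε.

```python
# Directed Hausdorff distance from A to B, brute force: for each point
# of A take its nearest neighbor in B, then take the worst case.
import math

def directed_hausdorff(A, B):
    return max(min(math.dist(a, b) for b in B) for a in A)

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = [(0.05, 0.0), (1.0, 0.1), (0.0, 1.0), (5.0, 5.0)]
d = directed_hausdorff(A, B)
print(d <= 0.15)  # matching succeeds under tolerance eps = 0.15
```

Note the asymmetry: the extra point (5.0, 5.0) in B does not affect the distance from A to B, which is why the directed (rather than symmetric) form fits the one-way matching question posed above.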
