
IN THIS ISSUE



Research Papers

J. Comput. Inf. Sci. Eng. 2018;18(2):021001-021001-18. doi:10.1115/1.4037227.

This paper presents a new method for extracting feature edges from computer-aided design (CAD)-generated triangulations. The major advantage of this method is that it tends to extract feature edges along the centroids of the fillets rather than along the edges where fillets are connected to nonfillet surfaces. Typical industrial models include very small-radius fillets between relatively large surfaces. While some of those fillets are necessary for certain types of analyses, many of them are irrelevant for many other types of applications. Narrow fillets are unnecessary details for those applications and cause numerous problems in the downstream processes. One solution to the small-radius fillet problem is to divide the fillets along the centroid and then merge each fragment of the fillet with nonfillet surfaces. The proposed method can find such fillet centroids and can substantially reduce the adverse effects of such small-radius fillets. The method takes a triangulated geometry as input and first simplifies the model so that small-radius, or "small," fillets are collapsed into line segments. The simplification is based on normal errors and is therefore scale-independent. It is particularly effective for a shape that is a mix of small and large features. The method then segments the simplified geometry, and this segmentation is transferred back to the original shape while maintaining the segmentation information. The groups of triangles are then expanded by a region-growing technique until all triangles are covered. The feature edges are finally extracted along the boundaries between the groups of triangles.

Commentary by Dr. Valentin Fuster
J. Comput. Inf. Sci. Eng. 2018;18(2):021002-021002-9. doi:10.1115/1.4039380.

A datum selection strategy based on statistical learning is proposed. Datum selection is an important part of tolerance specification, which is the basis of geometric tolerance selection and tolerance principle selection. The problem of datum selection is to deduce the datum reference frame (DRF) from geometrical, contact, and positioning characteristics. Currently, heuristic rules are used for DRF selection, leading to a suboptimal choice of DRF in many cases. The proposed strategy formulates normalized vectors computed from the geometric, contact, and positioning characteristics of surfaces. The surfaces of different parts can then be compared by their normalized vectors. A statistical learning method is then used to build a classifier that can discriminate datum feature vectors based on training samples. Finally, a case study is given to verify the strategy, and the different algorithms are compared and discussed.

J. Comput. Inf. Sci. Eng. 2018;18(2):021003-021003-12. doi:10.1115/1.4039334.

In this paper, the use of methods from the meta- or surrogate-modeling literature for building models that predict the draping of physical surfaces is examined. An example application concerning modeling of the behavior of a variable shape mold is treated. Four different methods are considered for this problem: difference methods assembled from kriging and proper orthogonal decomposition (POD), together with a spline-based underlying model (UM) and a novel patchwise modeling scheme. The four models, namely kriging and POD with kriging of the coefficients in global and local variants, are compared in terms of accuracy and numerical efficiency on data sets of different sizes for the treated application. It is shown that the POD-based methods are vastly superior to models based on kriging alone, and that the use of a difference model structure is advantageous. It is demonstrated that patchwise modeling schemes, where the complete surface behavior is modeled by a collection of locally defined smaller models, can provide a good compromise between model accuracy and scalability to large systems.

J. Comput. Inf. Sci. Eng. 2018;18(2):021004-021004-10. doi:10.1115/1.4038954.

Robotic bin picking requires using a perception system to estimate the posture of parts in the bin. The selected singulation plan should be robust with respect to perception uncertainties. If the estimated posture is significantly different from the actual posture, then the singulation plan may fail during execution. In such cases, the singulation process will need to be repeated. We are interested in selecting singulation plans that minimize the expected task completion time. In order to estimate the expected task completion time for a proposed singulation plan, we need to estimate the probability of success and the plan execution time. Robotic bin picking needs to be done in real-time. Therefore, candidate singulation plans need to be generated and evaluated in real-time. This paper presents an approach for utilizing computationally efficient simulations for generating singulation plans. Results from physical experiments match well with the predictions obtained from simulations.
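The plan-selection criterion described above reduces to a simple expected-value computation: if a singulation attempt takes time t and succeeds with probability p, independent retries make the attempt count geometric, so the expected completion time is t/p. The sketch below is not from the paper; it is a minimal illustration of this retry model, with hypothetical plan names and numbers.

```python
def expected_completion_time(t_exec, p_success):
    """Expected total time when a failed singulation attempt is repeated.

    With independent attempts of duration t_exec, each succeeding with
    probability p_success, the attempt count is geometric, so the
    expected total time is t_exec / p_success.
    """
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    return t_exec / p_success


def best_plan(plans):
    """Pick the candidate plan with the lowest expected completion time.

    `plans` is a list of (name, t_exec, p_success) tuples; the names and
    numbers used here are hypothetical.
    """
    return min(plans, key=lambda p: expected_completion_time(p[1], p[2]))


plans = [("fast-but-risky", 2.0, 0.5),   # E[T] = 2.0 / 0.5  = 4.00 s
         ("slow-but-sure", 3.0, 0.9)]    # E[T] = 3.0 / 0.9 ~= 3.33 s
print(best_plan(plans)[0])               # the robust plan wins
```

Note how a slower plan with a higher success probability can dominate a faster but riskier one, which is exactly why the plan must be robust to perception uncertainty.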

J. Comput. Inf. Sci. Eng. 2018;18(2):021005-021005-10. doi:10.1115/1.4039429.

With the advent of the fourth industrial revolution (Industry 4.0), manufacturing systems are being transformed into digital ecosystems. In this transformation, the internet of things (IoT) and other emerging technologies play a major role. To shift manufacturing companies toward IoT, smart sensor systems are required to connect their resources to the digital world. To address this issue, the proposed work presents a monitoring system for shop-floor control following the IoT paradigm. The proposed monitoring system consists of a data acquisition (DAQ) device that quickly and efficiently captures data from the machine tools and transmits them to a cloud gateway via a wireless sensor topology. The monitored data are transferred to a cloud server for further processing and visualization. The data transmission is performed on two levels, i.e., locally in the shop-floor using a star wireless sensor network (WSN) topology with a microcomputer gateway, and from the microcomputer to the cloud using Internet protocols. The developed system follows the IoT paradigm in terms of connecting the physical with the cyber world and offering integration capabilities with existing industrial systems. In addition, the open platform communications unified architecture (OPC UA) standard is employed to support the connectivity of the proposed monitoring system with other IT tools in an enterprise. The proposed monitoring system is validated in a laboratory as well as in machining and mold-making small and medium-sized enterprises (SMEs).

J. Comput. Inf. Sci. Eng. 2018;18(2):021006-021006-10. doi:10.1115/1.4039430.

For vision-based measurement, there is little research and there are few professional tools addressing local contour positional errors of flexible automotive rubber strips. To support the automatic measurement of contour positional errors, a novel local contour registration and measurement method based on shape descriptors is proposed. In this method, a shape descriptor is used to find the correspondence between a reference local contour and a desired local contour. First, a shape descriptor that includes the shape representation and restrictions of the local contour is extracted from the reference contour. Second, several tolerable shape descriptors for the desired actual local contour are constructed by adding loosening factors to the ideal descriptor, and an angular-similarity-based search strategy is used to find the best actual local contour. Finally, from the matched local point sets, a quantitative calculation step provides the desired deviation values. The method is implemented in a sealing-strip cross-section measurement system, and numerous cross-sectional profiles are tested. The experimental results verify the stability and effectiveness of the proposed method and demonstrate important progress toward the automatic measurement of flexible products.

J. Comput. Inf. Sci. Eng. 2018;18(2):021007-021007-14. doi:10.1115/1.4039431.

Information leakage can lead to loss of intellectual property and competitive edge. One of the primary sources of information leakage in collaborative design is sharing confidential information with collaborators, who may be also collaborating with competitors. Hiding information from collaborators is challenging in codesign because it can lead to inferior and suboptimal solutions. Therefore, there is a need for techniques that enable designers to protect confidential information from their collaborators while achieving solutions that are as good as those obtained when full information is shared. To address this need, we propose a secure codesign (SCD) framework that enables designers to achieve optimal solutions without sharing confidential information. It is built on two principles: adding/multiplying a parameter with a large random number hides the value of the parameter, and adding/multiplying a large number is orders of magnitude faster than using existing cryptographic techniques. Building on the protocols for basic arithmetic computations, developed in our earlier work, we establish protocols for higher order computations involved in design problems. The framework is demonstrated using three codesign scenarios: requirements-driven codesign, objective-driven codesign, and Nash noncooperation. We show that the proposed SCD framework enables designers to achieve optimal solutions in all three scenarios. The proposed framework is orders of magnitude faster than competing (but impractical for engineering design) cryptographic methods such as homomorphic encryption, without compromising on precision in computations. Hence, the proposed SCD framework is a practical approach for maintaining confidentiality of information during codesign.
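The masking principle the framework builds on can be illustrated in a few lines. The sketch below is not the authors' protocol; it is a generic additive-masking toy in which one party hides a fixed-point-scaled value behind a large random number, a collaborator computes on the masked value without learning it, and the mask is removed at the end. All names and numbers are illustrative.

```python
import secrets

MASK_BITS = 128      # mask range vastly larger than the (scaled) data
SCALE = 10 ** 9      # fixed-point scale so reals become exact integers


def mask(value):
    """Hide a real value: scale to an integer, add a large random mask."""
    r = secrets.randbits(MASK_BITS)
    return round(value * SCALE) + r, r


def unmask(masked_result, r):
    """Remove the mask and undo the fixed-point scaling."""
    return (masked_result - r) / SCALE


# Designer A hides a confidential parameter x; collaborator B adds its own
# contribution y to the masked value without ever learning x.
x, y = 3.5, 2.25                           # illustrative values only
masked_x, r = mask(x)
masked_sum = masked_x + round(y * SCALE)   # B computes on masked data
result = unmask(masked_sum, r)             # A recovers x + y = 5.75
print(result)
```

The appeal noted in the abstract is visible even here: the masked computation is a single integer addition, whereas a homomorphic-encryption equivalent would cost orders of magnitude more per operation.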

J. Comput. Inf. Sci. Eng. 2018;18(2):021008-021008-10. doi:10.1115/1.4039193.

Random variables are commonly encountered in engineering applications, and their distributions are required for analysis and design, especially for reliability prediction during the design process. Distribution parameters are usually estimated using samples. In many applications, samples are in the form of intervals, and the estimated distribution parameters will also be in intervals. Traditional reliability methodologies assume independent interval distribution parameters, but as shown in this study, the parameters are actually dependent since they are estimated from the same set of samples. This study investigates the effect of the dependence of distribution parameters on the accuracy of reliability analysis results. The major approach is numerical simulation and optimization. This study demonstrates that the independent distribution parameter assumption makes the estimated reliability bounds wider than the true bounds. The reason is that the actual combination of the distribution parameters may not include the entire box-type domain assumed by the independent interval parameter assumption. The results of this study not only reveal the cause of the imprecision of the independent distribution parameter assumption, but also demonstrate a need of developing new reliability methods to accommodate dependent distribution parameters.
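The dependence effect described above can be reproduced with a toy numerical experiment: enumerate every realization of a set of interval-valued samples, record the resulting (mean, standard deviation) pairs, and compare them against the box implied by treating the two parameters as independent intervals. This is a generic sketch with made-up sample intervals, not the study's simulation setup.

```python
from itertools import product
from statistics import mean, pstdev

# Interval-valued samples: each observation is known only to lie in [lo, hi].
# The numbers are made up for illustration.
samples = [(1.0, 1.4), (2.1, 2.5), (2.9, 3.3), (4.0, 4.6)]

# Realize every corner combination and record the induced (mean, std) pair.
pairs = [(mean(combo), pstdev(combo)) for combo in product(*samples)]

mu_lo = min(m for m, _ in pairs)
mu_hi = max(m for m, _ in pairs)
sd_lo = min(s for _, s in pairs)
sd_hi = max(s for _, s in pairs)

# The independent-parameter assumption treats the whole box
# [mu_lo, mu_hi] x [sd_lo, sd_hi] as feasible.  In reality only the pairs
# computed above can occur, so the box over-covers: its extreme corner is
# not attainable by any realization of the samples.
corner = (mu_hi, sd_hi)
print(corner in pairs)   # False: mean and std cannot peak simultaneously
```

The unattainable corner is precisely why reliability bounds computed under the independence assumption come out wider than the true bounds.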

J. Comput. Inf. Sci. Eng. 2018;18(2):021009-021009-12. doi:10.1115/1.4039455.

Additive manufacturing (AM) offers significant opportunities for product innovation in many fields, provided that designers are able to recognize the potential value of AM in a given product development process. However, this may be challenging for design teams without substantial experience with the technology. Design inspiration based on past successful applications of AM may facilitate its application even in relatively inexperienced teams. While design for additive manufacturing (DFAM) methods have experimented with the reuse of past knowledge, they may not be sufficient to fully realize AM's innovative potential. In many instances, relevant knowledge may be hard to find, lack context, or simply be unavailable. This design information is also typically divorced from the underlying logic of a product's business case. In this paper, we present a knowledge-based method for AM design ideation as well as the development of a suite of modular, highly formal ontologies to capture information about innovative uses of AM. This underlying information model, the innovative capabilities of additive manufacturing (ICAM) ontology, aims to facilitate innovative use of AM by connecting a repository of business and technical knowledge relating to past AM products with a collection of knowledge bases detailing the capabilities of various AM processes and machines. Two case studies are used to explore how this linked knowledge can be queried in the context of a new design problem to identify highly relevant examples of existing products that leveraged AM capabilities to solve similar design problems.

J. Comput. Inf. Sci. Eng. 2018;18(2):021010-021010-8. doi:10.1115/1.4039638.

This paper presents two bio-inspired algorithms for coalition formation in multiple modular robot systems. An effective and efficient coalition formation system can help a modular robot system take full advantage of the reconfigurability of modular robots. In this paper, the multirobot coalition formation problem is illustrated, and a mathematical model for the problem is described. Two bio-inspired algorithms, the ant colony algorithm (ACA) and the genetic algorithm (GA), are introduced for solving the mathematical model. With the two algorithms, a large number of robots can be formed into many different groups for a variety of applications, such as the parallel performance of multiple tasks by multiple teams of robots. The paper compares the efficiency and effectiveness of the two algorithms on the presented problem through a case study, and the results of the comparison are analyzed and discussed. The implementation details of the simulation and experiment using the ACA are also presented.
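As a generic illustration of the GA side of such a formulation (not the paper's actual model or encoding), the sketch below evolves an assignment of six robots into three two-robot coalitions, using a chromosome of group indices, a size-deviation fitness, one-point crossover, mutation, and elitist selection.

```python
import random

random.seed(42)                      # reproducible toy run

N_ROBOTS = 6
GROUPS = [2, 2, 2]                   # three coalitions of two robots each
POP, GENS, MUT = 30, 100, 0.1


def fitness(chrom):
    """Negative squared deviation of coalition sizes from the requirement."""
    sizes = [chrom.count(g) for g in range(len(GROUPS))]
    return -sum((s - need) ** 2 for s, need in zip(sizes, GROUPS))


def crossover(a, b):
    """One-point crossover of two group-index chromosomes."""
    cut = random.randrange(1, N_ROBOTS)
    return a[:cut] + b[cut:]


def mutate(chrom):
    """Reassign each robot to a random group with probability MUT."""
    return [random.randrange(len(GROUPS)) if random.random() < MUT else g
            for g in chrom]


pop = [[random.randrange(len(GROUPS)) for _ in range(N_ROBOTS)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]          # elitism: keep the best half
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(best, fitness(best))           # a perfect partition has fitness 0
```

A real coalition formation fitness would score task-capability matching rather than just group sizes; the skeleton (encode, score, select, recombine) is the same.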

J. Comput. Inf. Sci. Eng. 2018;18(2):021011-021011-8. doi:10.1115/1.4039476.

In this paper, we present a pattern development method for soft product design. We utilize a surface flattening method based on a mass-spring model to create two-dimensional (2D) patterns unfolded from a three-dimensional (3D) model. Multilevel meshes are proposed to expedite the flattening process, and a boundary optimization method is employed to guarantee that the 2D patterns can be sewn well. We apply the proposed method to the design of real soft products. Experimental results show that it can deal with complex surfaces efficiently and robustly, and that the manufactured products are satisfactory.

J. Comput. Inf. Sci. Eng. 2018;18(2):021012-021012-7. doi:10.1115/1.4039640.

The paper discusses thin part inspection using three-dimensional (3D) nonrigid registration. The main objective is to match measurement point data to its nominal representation so as to identify form defects. Since form defects are of the same order of magnitude as the thickness of the part, establishing such a matching is a challenging task. The originality of the method developed in this paper lies in using a deformable iterative closest point (ICP) algorithm and integrating a modal approach to express form defects. The method improves the matching through iterations of the ICP algorithm and establishes a definition of the error. The results of the application show that the present method is efficient.
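The rigid core of an ICP loop (before any deformable or modal extension such as the one proposed here) can be sketched compactly: alternate between nearest-neighbor correspondence and the closed-form best rotation and translation. The 2D toy below is a generic illustration, not the paper's deformable algorithm.

```python
import math

def icp_2d(src, dst, iters=20):
    """Minimal rigid 2D ICP: iteratively align `src` onto `dst`.

    Each iteration pairs every source point with its nearest destination
    point, then applies the closed-form best rotation and translation.
    """
    pts = list(src)
    for _ in range(iters):
        pairs = [(p, min(dst, key=lambda q: (q[0] - p[0]) ** 2
                                            + (q[1] - p[1]) ** 2))
                 for p in pts]
        n = len(pairs)
        cx = sum(p[0] for p, _ in pairs) / n   # source centroid
        cy = sum(p[1] for p, _ in pairs) / n
        dx = sum(q[0] for _, q in pairs) / n   # destination centroid
        dy = sum(q[1] for _, q in pairs) / n
        # Closed-form optimal rotation angle from centered correspondences.
        s = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx) for p, q in pairs)
        c = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy) for p, q in pairs)
        th = math.atan2(s, c)
        pts = [(math.cos(th)*(x-cx) - math.sin(th)*(y-cy) + dx,
                math.sin(th)*(x-cx) + math.cos(th)*(y-cy) + dy)
               for x, y in pts]
    return pts

# Recover a known rotation + translation of a small point set.
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 1.5)]
ang = 0.3
src = [(math.cos(ang)*x - math.sin(ang)*y + 0.4,
        math.sin(ang)*x + math.cos(ang)*y - 0.2) for x, y in dst]
aligned = icp_2d(src, dst)
err = max(math.hypot(a[0] - b[0], a[1] - b[1])
          for a, b in zip(aligned, dst))
```

A deformable variant additionally fits deviation coefficients (here, modal coefficients for form defects) inside each iteration, which is where the paper's contribution lies.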

J. Comput. Inf. Sci. Eng. 2018;18(2):021013-021013-9. doi:10.1115/1.4039639.

We present a unified method for numerical evaluation of volume, surface, and path integrals of smooth, bounded functions on implicitly defined bounded domains. The method avoids both the stochastic nature (and slow convergence) of Monte Carlo methods and problem-specific domain decompositions required by most traditional numerical integration techniques. Our approach operates on a uniform grid over an axis-aligned box containing the region of interest, so we refer to it as a grid-based method. All grid-based integrals are computed as a sum of contributions from a stencil computation on the grid points. Each class of integrals (path, surface, or volume) involves a different stencil formulation, but grid-based integrals of a given class can be evaluated by applying the same stencil on the same set of grid points; only the data on the grid points changes. When functions are defined over the continuous domain so that grid refinement is possible, grid-based integration is supported by a convergence proof based on wavelet analysis. Given the foundation of function values on a uniform grid, grid-based integration methods apply directly to data produced by volumetric imaging (including computed tomography and magnetic resonance), direct numerical simulation of fluid flow, or any other method that produces data corresponding to values of a function sampled on a regular grid. Every step of a grid-based integral computation (including evaluating a function on a grid, application of stencils on a grid, and reduction of the contributions from the grid points to a single sum) is well suited for parallelization. We present results from a parallelized CUDA implementation of grid-based integrals that faithfully reproduces the output of a serial implementation but with significant reductions in computing time. 
We also present example grid-based integral results to quantify convergence rates associated with grid refinement and dependence of the convergence rate on the specific choice of difference stencil (corresponding to a particular genus of Daubechies wavelet).
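The overall structure of a grid-based integral, a single sum of per-grid-point contributions over an axis-aligned box, can be illustrated with the crudest possible stencil: a midpoint rule with a 0/1 indicator of the implicit domain. This sketch is not the paper's wavelet-derived stencil; it only shows the grid-sum structure, using the unit disk as the implicit domain.

```python
import math

def grid_volume_integral(f, phi, box, n):
    """Integrate f over {x : phi(x) < 0} with a uniform midpoint-rule grid.

    This uses the simplest possible stencil (a 0/1 indicator evaluated at
    cell centers); the structure -- one pass summing per-grid-point
    contributions over an axis-aligned box -- is what matters here.
    """
    (x0, x1), (y0, y1) = box
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = x0 + (i + 0.5) * hx          # cell-center sample point
            y = y0 + (j + 0.5) * hy
            if phi(x, y) < 0:                # inside the implicit domain
                total += f(x, y) * hx * hy
    return total

# Area of the unit disk: integrate f = 1 over phi(x, y) = x^2 + y^2 - 1 < 0.
area = grid_volume_integral(lambda x, y: 1.0,
                            lambda x, y: x * x + y * y - 1.0,
                            box=((-1.2, 1.2), (-1.2, 1.2)), n=400)
print(area)   # approaches pi as the grid is refined
```

Because every cell contributes independently, the double loop parallelizes trivially, which is the property the paper's CUDA implementation exploits; the wavelet-based stencils replace the 0/1 indicator to obtain faster convergence under grid refinement.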

J. Comput. Inf. Sci. Eng. 2018;18(2):021014-021014-11. doi:10.1115/1.4039850.

Design changes are necessary to sustain a product against competition. Due to technical, social, and financial constraints, an organization can implement only a few of many change alternatives. Hence, wise selection of a change alternative fundamentally influences the growth of the organization. Organizations lack knowledge bases to effectively capture the rationale for a design change, i.e., to identify the potential effects of a design change. In this paper, (1) we propose a knowledge base called the multiple-domain matrix that comprises the relationships among different parameters that are the building blocks of a product and its manufacturing system; (2) using the indirect change propagation method, we capture these relationships to identify the potential effects of a design change; and (3) we propose a cost-based metric called change propagation impact (CPI) to quantify the effects captured from the multiple-domain matrix. These individual pieces of work are integrated into a web-based tool called Vatram. The tool is deployed in a design environment to evaluate its usefulness and usability.
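Indirect change propagation over a multiple-domain matrix amounts to reachability in the directed graph that the matrix encodes, and a cost-based impact metric can then be a sum over the reachable parameters. The sketch below is a generic toy with hypothetical parameter names and costs, not Vatram's actual data model or CPI formula.

```python
# Toy multiple-domain matrix: mdm[a] lists the parameters directly
# affected when parameter `a` changes (design + manufacturing domains).
# All names and costs are hypothetical.
mdm = {
    "blade_length": ["hub_diameter", "mold_cavity"],
    "hub_diameter": ["shaft_fit"],
    "mold_cavity": ["machining_time"],
    "shaft_fit": [],
    "machining_time": [],
}
cost = {"blade_length": 5.0, "hub_diameter": 3.0, "mold_cavity": 4.0,
        "shaft_fit": 2.0, "machining_time": 1.5}


def affected(start):
    """Direct and indirect change propagation: reachability over the MDM."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in mdm[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


def cpi(start):
    """A simple cost-based change propagation impact of modifying `start`."""
    return sum(cost[p] for p in affected(start))


print(sorted(affected("blade_length")), cpi("blade_length"))
```

Comparing the CPI of competing change alternatives then gives a quantitative basis for choosing which alternative to implement.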

Topics: Design
J. Comput. Inf. Sci. Eng. 2018;18(2):021015-021015-11. doi:10.1115/1.4039849.

Disassembly, the process of separating an end-of-life (EOL) product into discrete components in order to re-utilize their residual values, is an important enabler of sustainable manufacturing. This work focuses on modeling the information related to disassembly planning and develops a disassembly information model (DIM) based on an extensive investigation of various informational aspects in the domain of disassembly planning. The developed DIM, which systematizes and classifies the information related to products, processes, uncertainties, and degradations, follows a layered modeling methodology in which the DIM is subdivided into layers with the intent to separate general knowledge into different levels of abstraction and to reach a balance between information reusability and information usability. Two prototype disassembly planning applications have been incorporated to validate the usability and reusability of the developed DIM.

J. Comput. Inf. Sci. Eng. 2018;18(2):021016-021016-10. doi:10.1115/1.4039901.

In many systems-engineering problems, such as surveillance, environmental monitoring, and cooperative task performance, it is critical to optimally allocate limited resources within a restricted area. The static coverage problem (SCP) is an important class of resource allocation problem; it focuses on covering an area of interest so that activities in that area can be detected with high probability. In many practical settings, primarily due to financial constraints, a system designer has to allocate resources in multiple stages, assigning a fixed number of resources, i.e., agents, in each stage. In the multistage formulation, agent locations for the next stage depend on previous-stage agent locations, and such multistage static coverage problems are nontrivial to solve. In this paper, we propose an efficient sequential sampling algorithm to solve the multistage static coverage problem (MSCP) in the presence of resource intensity allocation maps (RIAMs), distribution functions that abstract the event to be detected or monitored in a given area. The agent locations for the successive stage are determined by formulating and solving an optimization problem. Three different objective functions are developed and proposed: (1) the L2 difference, (2) sequential minimum energy design (SMED), and (3) a weighted combination of L2 and SMED. Pattern search (PS), an efficient heuristic algorithm, is used to solve the formulated optimization problems. The developed approach is tested on two- and higher-dimensional functions. Results for a real-life application, the multistage placement of windmills inside a wind farm, are also presented.
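Pattern search itself is straightforward to sketch: poll a fixed set of directions around the incumbent point, move on improvement, and shrink the mesh otherwise. The minimal compass-search variant below is a generic illustration, not the paper's PS configuration or coverage objective; the quadratic bowl stands in for an objective such as the L2 difference.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass-style pattern search: poll +/- each axis, shrink on failure."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = x[:]
                cand[i] += d            # poll one coordinate direction
                fc = f(cand)
                if fc < fx:             # accept the first improving poll
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                 # no better poll point: refine mesh
        it += 1
    return x, fx


# Minimize a smooth bowl with its minimum at (3, -2).
obj = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2
x, fx = pattern_search(obj, [0.0, 0.0])
print(x, fx)
```

Because it needs only objective evaluations (no gradients), pattern search suits objectives like SMED that are cheap to evaluate but awkward to differentiate.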

J. Comput. Inf. Sci. Eng. 2018;18(2):021017-021017-14. doi:10.1115/1.4039432.

Recently, social media has emerged as an alternative, viable source for extracting large-scale, heterogeneous product features in a time- and cost-efficient manner. One of the challenges of utilizing social media data to inform product design decisions is the existence of implicit data such as sarcasm, which accounts for 22.75% of social media data and can potentially bias the predictive models that learn from such data sources. For example, if a customer says "I just love waiting all day while this song downloads," an automated product feature extraction model may incorrectly associate the positive sentiment of "love" with the cell phone's ability to download. While traditional text mining techniques are designed to handle well-formed text, where product features are explicitly inferred from the combination of words, these tools fail to process social messages that convey product feature information only implicitly. In this paper, we propose a method that enables designers to utilize implicit social media data by translating each implicit message into its equivalent explicit form using a word co-occurrence network. A case study of Twitter messages that discuss smartphone features is used to validate the proposed method. The results from the experiment not only show that the proposed method improves the interpretability of implicit messages, but also shed light on potential applications in design domains where this work could be extended.
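A word co-occurrence network of the kind used for such translation can be built by counting how often word pairs appear in the same message. The sketch below is a toy with a made-up three-message corpus, not the paper's network construction or translation procedure.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(messages):
    """Count how often each word pair appears in the same message."""
    net = defaultdict(int)
    for msg in messages:
        words = sorted(set(msg.lower().split()))
        for a, b in combinations(words, 2):   # pairs in alphabetical order
            net[(a, b)] += 1
    return net

# Hypothetical three-message corpus.
corpus = [
    "waiting all day for the download",
    "download speed is slow",
    "slow download ruins the phone",
]
net = cooccurrence_network(corpus)

# Words strongly linked to "download" reveal its implicit context.
linked = {(a if b == "download" else b): n
          for (a, b), n in net.items() if "download" in (a, b)}
print(linked["slow"])   # 2: "slow" and "download" share two messages
```

Edge weights like these let an implicit, sarcastic mention of "download" be mapped toward explicit neighbors such as "slow," which is the intuition behind translating implicit messages into explicit form.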

