
## In This Issue

### Research Papers

J. Comput. Inf. Sci. Eng. 2008;8(3):031001-031001-9. doi:10.1115/1.2956995.

Nature can be a major source of inspiration for engineering designers. Biomimicry is often used in specific cases to develop solutions that mimic natural systems. However, knowledge of natural systems is still not used systematically and routinely to inspire innovative product development, from ideation of solutions to their implementation as products. In ideation, potential solutions to a design problem are generated. To support ideation, two databases are developed whose entries contain information about natural and artificial systems. A novel generic causal model is developed for structuring information about how these systems achieve their behavior. Three algorithms are developed for analogical search of entries that could inspire ideation of solutions to a given problem. In realization, these solutions are evaluated and modified by experimenting with them in virtual and physical forms and environments.

J. Comput. Inf. Sci. Eng. 2008;8(3):031002-031002-10. doi:10.1115/1.2960487.

Adaptive reuse of archived parametric finite element analysis (FEA) models involves integration of new information into archived models to model similar new problems. Retrieval of relevant archived models and supporting documents from electronic repositories is difficult when a modeler is unable to describe information needs precisely in a query using keywords. The use of description logic (DL) concepts to describe archived models and build expandable classification hierarchies to facilitate retrieval is proposed and illustrated. A domain-independent retrieval algorithm based on the traversal of description logic concept hierarchies is introduced. The usefulness of the approach is demonstrated by showing that precise classifications of FEA models can be automatically computed from semantically rich representations in a fairly inexpressive DL using subsumption. The usefulness of subsumption hierarchies for efficient retrieval of FEA models illustrates the benefits of DL for their automated management.
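The core retrieval idea can be sketched without a full DL reasoner: attach each archived model to a concept in a classification hierarchy, and let a query concept retrieve every model whose concept it subsumes. The taxonomy and model names below are hypothetical illustrations, not the paper's ontology.

```python
# Minimal sketch of subsumption-based retrieval over a concept hierarchy.
# Hypothetical taxonomy: concept -> list of its parent (more general) concepts.
PARENTS = {
    "PlaneStressModel": ["LinearElasticModel"],
    "PlaneStrainModel": ["LinearElasticModel"],
    "LinearElasticModel": ["StructuralModel"],
    "StructuralModel": ["FEAModel"],
    "ThermalModel": ["FEAModel"],
}

# Hypothetical archive: model id -> its most specific classification.
MODELS = {
    "bracket_v2": "PlaneStressModel",
    "dam_section": "PlaneStrainModel",
    "heat_sink": "ThermalModel",
}

def subsumed_by(concept, query):
    """True if `query` subsumes `concept`, i.e., concept is-a query."""
    if concept == query:
        return True
    return any(subsumed_by(p, query) for p in PARENTS.get(concept, []))

def retrieve(query):
    """All archived models classified under the query concept."""
    return sorted(m for m, c in MODELS.items() if subsumed_by(c, query))
```

A query for `StructuralModel` would return both elastic models but not the thermal one, which is the behavior the subsumption hierarchy buys over flat keyword matching.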

J. Comput. Inf. Sci. Eng. 2008;8(3):031003-031003-11. doi:10.1115/1.2955481.

Rapid advancement of 3D sensing techniques has made dense and accurate point clouds of objects readily available. The growing use of such scanned point sets in product design, analysis, and manufacturing necessitates research on direct processing of point set surfaces. In this paper, we present an approach that enables the direct layered manufacturing of point set surfaces. This new approach is based on adaptive slicing of moving least squares (MLS) surfaces. Salient features of this new approach include the following: (1) It bypasses the laborious surface reconstruction and avoids model conversion induced accuracy loss. (2) The resulting layer thickness and layer contours are adaptive to local curvatures, and thus it leads to better surface quality and more efficient fabrication. (3) The curvatures are computed from a set of closed-form formulas based on the MLS surface. The MLS surface naturally smoothes the point cloud and allows upsampling and downsampling, and thus it is robust even for noisy or sparse point sets. Experimental results on both synthetic and scanned point sets are presented.
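The curvature-adaptive thickness selection in feature (2) can be illustrated with a standard chord-deviation bound (an assumed textbook formula, not the paper's exact closed form): for local curvature kappa, a layer of thickness t deviates from the surface by roughly t^2 * kappa / 8, so bounding that deviation by a tolerance delta gives t = sqrt(8 * delta / kappa), clamped to the fabrication process limits.

```python
import math

def layer_thickness(kappa, delta=0.01, t_min=0.05, t_max=0.5):
    """Curvature-adaptive slice thickness (illustrative sketch).

    kappa  -- local surface curvature (1/radius); 0 means flat
    delta  -- allowed chord ("cusp") deviation per layer
    t_min, t_max -- machine limits on layer thickness
    """
    if kappa <= 0:  # flat region: use the thickest allowed layer
        return t_max
    t = math.sqrt(8.0 * delta / kappa)  # deviation ~ t^2 * kappa / 8 <= delta
    return max(t_min, min(t_max, t))
```

Flat regions get the thickest (fastest) layers, while highly curved regions are sliced thinly for surface quality, which is the trade-off the abstract describes.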

J. Comput. Inf. Sci. Eng. 2008;8(3):031004-031004-7. doi:10.1115/1.2956997.

Data envelopment analysis (DEA) has been widely applied in evaluating multicriteria decision making problems, which have multi-input and multi-output. However, the traditional DEA method neither takes the decision maker’s subjective preferences for the individual criteria into consideration nor ranks the selected options or decision making units (DMUs). On the other hand, Saaty’s analytical hierarchy process (AHP) was established to rank options or DMUs under multi-input and multi-output through pairwise comparisons. However, in most cases, the AHP pairwise comparison method is not perfectly consistent, which may give rise to confusion in determining the appropriate priorities of each criterion to be considered. The inconsistency reflects the fuzziness in generating the relative importance weight for each criterion. In this paper, a novel method, which employs both the DEA and AHP methods, is proposed to evaluate the overall performance of suppliers’ involvement in the production of a manufacturing company. This method has been developed by modifying the DEA method into a weighting-constrained DEA method using a piecewise triangular weighting fuzzy set, which is generated from the inconsistent AHP comparisons. A bias tolerance ratio (BTR) is introduced to represent the varying but restrained weighting values of each criterion. Accordingly, the BTR provides the decision maker with a controllable parameter for tightening or loosening the range of the weighting values in evaluating the overall performance of available suppliers, and hence overcomes the two weaknesses of the traditional DEA method.
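The AHP ingredient and the role of the BTR can be sketched in a few lines. This uses the geometric-mean approximation of the AHP priority vector rather than Saaty's eigenvector method, and models the BTR simply as a symmetric band around each weight; the paper's piecewise triangular fuzzy construction and the DEA linear programs are not reproduced.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix
    (geometric-mean approximation of the AHP eigenvector)."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def btr_bounds(weights, btr):
    """Hypothetical BTR band: each criterion's weight may vary
    within [w*(1-btr), w*(1+btr)] inside the DEA model."""
    return [(w * (1 - btr), w * (1 + btr)) for w in weights]
```

A small BTR pins the DEA weights near the AHP preferences; a large BTR relaxes toward classical, freely weighted DEA, which is the "controllable parameter" the abstract describes.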

J. Comput. Inf. Sci. Eng. 2008;8(3):031005-031005-11. doi:10.1115/1.2956990.

A method for finite element analysis using a regular or structured grid is described that eliminates the need for generating a conforming mesh for the geometry. The geometry of the domain is represented using implicit equations, which can be generated from traditional solid models. Solution structures are constructed using implicit equations such that the essential boundary conditions are satisfied exactly. This approach is used to solve boundary value problems arising in thermal and structural analysis. Convergence analysis is performed for several numerical examples, and the results are compared with analytical and finite element analysis solutions to show that the method gives solutions that are similar in quality to the finite element method but often less computationally expensive. Furthermore, by eliminating the need for mesh generation, better integration can be achieved between the solid modeling and analysis stages of the design process.
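The solution-structure idea can be shown on a 1D toy problem (an illustrative sketch, not the paper's implicit solid-model machinery): on [0, 1] with essential conditions u(0)=a and u(1)=b, pick a function phi that vanishes exactly on the boundary. Then u(x) = g(x) + phi(x)*c(x) satisfies the boundary conditions exactly for any choice of the free field c, so the numerical method only has to determine c.

```python
def solution_structure(a, b, c):
    """Build u(x) = g(x) + phi(x) * c(x) on [0, 1].

    g interpolates the essential boundary values, phi = x*(1-x)
    vanishes at x=0 and x=1, and c is the free field left for the
    solver to determine.  The boundary conditions hold for ANY c.
    """
    g = lambda x: a + (b - a) * x   # matches u(0)=a, u(1)=b
    phi = lambda x: x * (1.0 - x)   # implicit boundary function
    return lambda x: g(x) + phi(x) * c(x)

# Even a wild, arbitrary free field cannot break the boundary values:
u = solution_structure(a=2.0, b=5.0, c=lambda x: 100.0 * x)
```

In the paper's setting phi comes from implicit equations of the CAD geometry, which is what lets a non-conforming structured grid satisfy essential boundary conditions exactly.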

J. Comput. Inf. Sci. Eng. 2008;8(3):031006-031006-10. doi:10.1115/1.2960489.

This paper presents an approach to automatically recover mesh surfaces with sharp edges for solids from their binary volumetric discretizations (i.e., voxel models). Our method consists of three steps. Topological singularities are first eliminated on the binary grids so that a topologically correct mesh $M_0$ can be easily constructed. After that, the shape of $M_0$ is refined, and its connectivity is iteratively optimized into $M_n$. The shape refinement is governed by the duplex distance fields derived from the input binary volume model. However, the refined mesh surface lacks sharp edges. Therefore, we employ an error-controlled variational shape approximation algorithm to segment $M_n$ into nearly planar patches and then recover sharp edges by applying a novel segmentation-enhanced bilateral filter to the surface. Using the technique presented in this paper, smooth regions and sharp edges can be automatically recovered from raw binary volume models without scalar field or Hermite data. Compared to other related surface recovery methods on binary volumes, our algorithm requires fewer heuristic coefficients.
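The edge-preserving behavior of a bilateral filter, which the paper's segmentation-enhanced variant builds on, can be seen on a plain 1D signal (this sketch is the standard filter, not the paper's mesh version): each sample is replaced by a weighted average of its neighbors, with weights that fall off both with distance and with value difference, so flat regions are smoothed while sharp steps survive.

```python
import math

def bilateral_1d(signal, sigma_d=2.0, sigma_r=0.5, radius=3):
    """Standard bilateral filter on a 1D signal (illustrative sketch)."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # spatial closeness * value similarity
            w = math.exp(-((i - j) ** 2) / (2 * sigma_d ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Applied to a step signal, samples on each side of the step stay close to their original level: neighbors across the step contribute little because their values differ too much, which is exactly the mechanism that preserves sharp edges.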

J. Comput. Inf. Sci. Eng. 2008;8(3):031007-031007-13. doi:10.1115/1.2960490.

Modeling the milling process requires cutter/workpiece engagement (CWE) geometry in order to predict cutting forces. The calculation of these engagements is challenging due to the complicated and changing intersection geometry that occurs between the cutter and the in-process workpiece. This geometry defines the instantaneous intersection boundary between the cutting tool and the in-process workpiece at each location along a tool path. This paper presents components of a robust and efficient geometric modeling methodology for finding CWEs generated during three-axis machining of surfaces using a range of different types of cutting tool geometries. A mapping technique has been developed that transforms a polyhedral model of the removal volume from the Euclidean space to a parametric space defined by the location along the tool path, the engagement angle, and the depth of cut. As a result, intersection operations are reduced to first order plane-plane intersections. This approach reduces the complexity of the cutter/workpiece intersections and also eliminates robustness problems found in standard polyhedral modeling and improves accuracy over the $Z$-buffer technique. The CWEs extracted from this method are used as input to a force prediction model that determines the cutting forces experienced during the milling operation. The reported method has been implemented and tested using a combination of commercial applications. This paper highlights ongoing collaborative research into developing a virtual machining system.
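The change of coordinates at the heart of the method can be sketched for the simplest case, a cylindrical cutter moving along a straight path segment. The frame below is an assumption for illustration (tool axis along +z, path direction a unit vector in the xy plane, engagement angle measured in the xy plane), not the paper's exact parametrization.

```python
import math

def to_cwe_space(p, path_start, path_dir):
    """Map a Euclidean point to (s, theta, d) for a cylindrical cutter.

    s     -- location along the tool path
    theta -- engagement angle around the tool axis, in [0, 2*pi)
    d     -- depth along the tool axis (z)
    Assumes path_dir is a unit vector in the xy plane.
    """
    px, py, pz = (p[i] - path_start[i] for i in range(3))
    s = px * path_dir[0] + py * path_dir[1]     # project onto the path
    rx = px - s * path_dir[0]                   # radial offset from the
    ry = py - s * path_dir[1]                   # tool center at parameter s
    theta = math.atan2(ry, rx) % (2 * math.pi)  # engagement angle
    d = pz                                      # depth of cut coordinate
    return s, theta, d
```

Once the removal volume's facets are mapped into this (s, theta, d) space, the engagement boundary at each path location reduces to planar intersections, which is the simplification the abstract claims.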

J. Comput. Inf. Sci. Eng. 2008;8(3):031008-031008-8. doi:10.1115/1.2966384.

Multiphysics applications are real-world problems with a large number of different shape components that obey different physical laws and manufacturing constraints and interact with each other through geometric and physical interfaces. They demand accurate and efficient solutions and a modern type of computational modeling, which designs the whole physical system with as much detail as possible. The simulation of a gas turbine engine is such a multiphysics application and is realized with GasTurbnLab, an agent-based Multiphysics Problem Solving Environment (MPSE). A performance evaluation study of it is presented in this paper. For this, a short description of the software components and hardware infrastructure is given. The performance and the scalability of the parallelism are depicted, and the communication overhead between agents is studied with respect to the number of agents and their location in the “computational grid.” The execution time is recorded, and its analysis verifies the complexity of the solvers in use and the performance of the available hardware. Three different clusters of INTEL Pentium processors were used for experimentation to study how the communication time was affected by processor homogeneity/heterogeneity and the different connections between the processors. The study of the numerical experiments shows that the domain decomposition and interface relaxation methodology, along with the usage of agent platforms, does not increase the complexity of the simulation problem, and the communication cost is too low, compared with the computation cost, to noticeably affect the total simulation time. Therefore, GasTurbnLab is an efficient example of the simulation of a complex physical phenomenon.

J. Comput. Inf. Sci. Eng. 2008;8(3):031009-031009-12. doi:10.1115/1.2956992.

Point cloud construction using digital fringe projection (PCCDFP) is a noncontact technique for acquiring dense point clouds to represent the 3D shapes of objects. Most existing PCCDFP systems use projection patterns consisting of straight fringes with fixed fringe pitches. In certain situations, such patterns do not give the best results. In our earlier work, we have shown that for surfaces with a large range of normal directions, patterns that use curved fringes with spatial pitch variation can significantly improve the process of constructing point clouds. This paper describes algorithms for automatically generating adaptive projection patterns that use curved fringes with spatial pitch variation to provide improved results for an object being measured. We also describe the supporting algorithms that are needed for utilizing adaptive projection patterns. Both simulation and physical experiments show that adaptive patterns are able to achieve improved performance, in terms of measurement accuracy and coverage, as compared to fixed-pitch straight fringe patterns.
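The phase-recovery step common to PCCDFP systems (the abstract's adaptive-pattern generation builds on top of it, and is not reproduced here) is the standard N-step phase-shifting calculation: from N intensity samples of a pixel under patterns shifted by 2*pi/N, the wrapped fringe phase is recovered with an arctangent.

```python
import math

def wrapped_phase(intensities):
    """Wrapped phase from N-step phase-shifted intensities.

    For I_k = A + B*cos(phi + 2*pi*k/N), k = 0..N-1, returns phi
    wrapped to (-pi, pi].  Standard N-step formula, N >= 3.
    """
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Example: a pixel with true fringe phase 0.7 rad under a 4-step sequence
I = [100 + 50 * math.cos(0.7 + 2 * math.pi * k / 4) for k in range(4)]
```

Triangulating this per-pixel phase against the projector geometry yields the depth at each pixel; denser or curved fringes change how accurately the phase, and hence depth, can be resolved on steep surfaces.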


### Technology Reviews

J. Comput. Inf. Sci. Eng. 2008;8(3):034001-034001-6. doi:10.1115/1.2960488.

Agile methods of software development promote the use of flexible architectures that can be rapidly refactored and rebuilt as necessary for the project. In the mechanical engineering domain, software tends to be very complex and requires the integration of several modules that result from the efforts of large numbers of programmers over several years. Such software needs to be extensible, modular, and adaptable so that a variety of algorithms can be quickly tested and deployed. This paper presents an application of the unified process (UP) to the development of a research process planning system called CyberCut. UP is used to (1) analyze and critique early versions of CyberCut and (2) guide current and future developments of the CyberCut system. CyberCut is an integrated process planning system that converts user designs to instructions for a computer numerical control (CNC) milling machine. The conversion process involves algorithms to perform tasks such as feature extraction, fixture planning, tool selection, and tool-path planning. The UP-driven approach to the development of CyberCut involves two phases. The inception phase outlines a clear but incomplete description of the user needs. The elaboration phase involves iterative design, development, and testing using short cycles. The software makes substantial use of design patterns to promote clean and well-defined separation between and within components, enabling independent development and testing. The overall development of the software tool took about two months with five programmers. It was later possible to easily integrate or substitute new algorithms into the system, so that programming resources were more productively used to develop new algorithms. The experience with UP shows that methodologies such as UP are important for engineering software development, where research goals, technology, algorithms, and implementations change dramatically and frequently.
