Technical Brief

Mass Customized Design of Cosmetic Masks Using Three-Dimensional Parametric Human Face Models Constructed From Anthropometric Data

Author and Article Information
Chih-Hsing Chu

Department of Industrial Engineering
and Engineering Management,
National Tsing Hua University,
Hsinchu 30013, Taiwan

I-Jan Wang

Department of Industrial Engineering
and Engineering Management,
National Tsing Hua University,
Hsinchu 30013, Taiwan
e-mail: chchu@ie.nthu.edu.tw

Manuscript received October 15, 2017; final manuscript received February 4, 2018; published online June 12, 2018. Assoc. Editor: Jitesh H. Panchal.

J. Comput. Inf. Sci. Eng 18(3), 034501 (Jun 12, 2018) (12 pages) Paper No: JCISE-17-1231; doi: 10.1115/1.4039335 History: Received October 15, 2017; Revised February 04, 2018

Cosmetic masks are popular skincare products widely used by young and female consumers. Most cosmetic masks on the market offer very few sizes to choose from, often producing ill-fitting masks with reduced wearing comfort and skincare functionality. This paper describes how to realize customized design of cosmetic masks using three-dimensional (3D) parametric face models derived from a large amount of scanned facial data. The parametric models approximate individual faces using a nonlinear regression model controlled by a set of facial parameters that are easy to measure. They serve as effective reference geometry for 3D mask design. A prototype mask design system implementing the parametric modeling method demonstrates the customized design process. The system allows the user to construct the mask shape directly on the 3D meshes of a face model by specifying inner and outer boundary curves. An automatic flattening function unfolds the trimmed meshes into a two-dimensional (2D) pattern with reduced shape distortion. This research enhances the practical value of large-scale anthropometric data by realizing human-centric design customization, using the cosmetic facial mask as an example.


Economic globalization has changed consumers' awareness of modern products. Merely meeting functional requirements no longer guarantees the success of new products in the market. Most people expect a product to satisfy individual needs and provide a desirable user experience [1]. For this reason, customized design has become an effective approach for companies to increase product value and differentiate themselves from their competitors. Customized design emphasizes effective communication between product developers and users in the early stages of the product development process. This is particularly important for products closely related to the human body [2]. Human-centric design has thus emerged in recent years and received much attention in industry.

Cosmetic and skin care products are high-volume consumer commodities in today's market. Among them, the cosmetic face mask has recently gained tremendous popularity with young and female consumers. The effects of a facial mask treatment include revitalizing, healing, or refreshing the skin. Those effects may be negatively influenced by ill-fitting masks that fail to cover the user's face closely. A typical example is that a face mask designed for an Asian woman usually does not fit a female user in the western world well. The cosmetic face mask is thus a product that calls for customized design. Unfortunately, most cosmetic masks on the market offer very few sizes to choose from, often producing ill-fitting masks and wearing discomfort. The mask design needs to consider differences in individual face geometry. Capturing a person's three-dimensional (3D) face geometry is not an easy task, though. The face capturing process normally involves specialized scanning devices, and processing the scanned data is lengthy and far from fully automated. This constitutes a major obstacle for customized design of products related to the human face [3].

Parametric modeling techniques have been commonly used in the area of engineering design. Most modern CAD software tools have adopted parametric modeling for constructing, maintaining, and recording two-dimensional (2D)/3D part drawings. Parametric modeling of free-form geometry remains a challenging task in most applications, but it may provide an effective means of quickly generating approximate individual face geometry that overcomes the obstacle mentioned above. Therefore, this paper presents a parametric modeling procedure for the 3D human face that supports mass customized design of cosmetic face masks. A nonlinear representation scheme is proposed to construct 3D face models from anthropometric data obtained from scanning. The geometry of those models can be quickly controlled and manipulated by a set of anatomically defined feature parameters that are easy to measure. The model generated from the parameter values measured on a user accurately approximates his/her face geometry. The result works as an effective reference for constructing a mask shape that closely matches the user's face. A prototype design system implementing the parametric modeling method illustrates the customized design process. Quantitative analysis is conducted to verify the effectiveness of the modeling method. This work demonstrates the practical value of large-scale anthropometric data for human-centric design. The content of this paper is extended from a paper presented at the 2017 ASME CIE Conference (see Ref. [3]).

The cosmetic face mask has recently become very popular with young and female consumers. There is a wide variety of facial masks on the market; their main differences lie in the chemical ingredients they contain. In contrast, most facial masks come in only one size, or in limited sizing options such as large, medium, and small. Most products have not considered the degree of fit to various facial shapes or differences among ethnic groups. The design reference for the mask shape could simply be the face of someone who does not represent any particular group of people, because a representative face shape is not available. Lacking a theoretical basis, the reference geometry does not guarantee the quality of designs derived from it [4]. Meeting the individual needs of a particular ethnic group is highly difficult, if not impossible. Consumers are forced to choose from a limited number of predefined sizes to find a mask close to their specific face geometry.

The degree of fit to the user's face can largely affect the effectiveness of a cosmetic mask and the wearing comfort. Low-cost mass customized design of cosmetic masks is still not feasible in the current market. However, progress in 3D scanning technology [5] enables quick and accurate collection of anthropometric data of the human body. Product designers can extract valuable design information by analyzing these data. Statistical methods have been applied to construct 3D human models that assist the development and evaluation of product designs. Loker et al. [6] developed a method of using body scanning data to modify the parameters of clothes and increase the degree of fit. Their study discussed the design of size differences among target customers and the modification of each size to fit the scanned data within the customer group. Liu [7] utilized captured images of outer ears to measure important ear-related parameters, thus enabling accurate design of products such as earphones and earplugs. The study adopted analysis of variance to characterize the correlation between gender, age, and the measured parameters.

To realize the idea of customized design, the most intuitive approach is to scan the whole body, or a single body part, of a user to acquire the body geometry. The product design is then constructed or adjusted accordingly with computer-aided design tools. Istook and Hwang [8] described potential applications of body scanning data in the fashion industry and discussed the research direction of mass customization by integrating 3D data and CAD software. A similar idea has been adopted in the development of medical ancillary products, including the structure and production of dentures [9], as well as the modeling of ear-canal-type electronic hearing aids [10]. In both studies, the first step of customized design is to obtain the 3D geometry of the user's body, which involves the use of expensive 3D scanning devices. This may significantly increase the cost of product development. The availability of the scanning devices also limits the realization of mass customization in practice. Lacko et al. [11] proposed a constrained K-medoids clustering method for the sizing of head-related products. The method outperforms previous feature-based sizing and shape-based clustering approaches. They suggested that using shape models for product sizing results in a better fit for near-body products.

Luximon et al. [12] developed an accurate 3D head and face model for the Chinese population and provided meaningful statistical results on 3D head and face shapes. A surface modeling algorithm based on point cloud data was applied to build the model with anatomical and virtual landmarks. The variations in head and face shapes analyzed using principal component analysis (PCA) led to interesting findings on the head and face sizes of the Chinese. Their later work [13] discussed three types of 3D scanners used for scanning the human head and face: the Cyberware 3030 color scanner, the Artec Eva 3D scanner, and the Structure Sensor ST01 model. They provided an overview of the possible advantages and limitations of all three scanners. Ellena et al. [14] proposed new digital head models representing adult cyclists in Australia. Four models were generated based on an Australian 3D anthropometric database of head shapes and a modified hierarchical clustering algorithm. Considerable shape differences were identified between those models and the current Australian standard. Huang et al. [15] tested a large number of combinations of feature parameters for modeling 3D facial data of Taiwanese workers. They developed a method to geometrically estimate the degrees of comfort and leakage in wearing respiratory masks. The best combination of feature parameters was determined based on the estimation results.

The above literature review shows that few studies have looked into parametric modeling of the 3D human face for customized product designs. Analysis of anthropometric databases has mostly focused on discovering key feature dimensions and/or size classification. Unlike medical products, the cosmetic facial mask is a low-cost, high-volume consumer product, whose customized design methods have to be simple yet cost-effective. For this purpose, this work develops a parametric modeling procedure that generates 3D models approximating individual face geometries. The focus is to support mass-customized design of cosmetic masks. The modeling procedure extends our previous work [16,17] by adopting nonlinear regression equations that connect the 3D mesh of a face model with the controlling parameter values. Test results show that the nonlinear models interpret fine variations in different regions of a human face better than the linear ones.

Designing facial products directly on the face model of a real person is highly desirable to ensure design quality. The face model can be constructed by 3D scanning of the person's face. However, the construction process normally involves manual data processing and thus does not support real-time applications. An effective approach to overcoming these deficiencies is to derive, from anthropometric data, parametric models that approximate the face geometry through feature parameters that are easy to measure.

Collecting Training Facial Data.

Parametric face models are constructed from a sufficient number of 3D face models serving as training data. We acquire the facial geometry of 150 Taiwanese female subjects aged 18 to 25 using a 3D facial scanner. This scanner is a noncontact measuring device based on the structured-light principle that simultaneously captures the depth and color images of a subject. Two or more devices are usually required to reconstruct the facial geometry from different view angles. This work employs two scanners to collect facial data; their setup is shown in Fig. 1. The subject needs to wear a swimming cap tightly to cover his/her hair during the capturing process. We manually mark a number of landmarks on each subject's face to define the region of interest. Those landmarks include the right and left sides of the jaw, the left and right temples, and the menton. Each scanner needs to be able to “see” the side of the jaw, the temple, and the menton from its viewing angle, marked as green points in Fig. 2.

The structured-light principle allows multiple scanners to take images almost instantaneously. The captured data are output in the form of textured 3D meshes, as shown in Fig. 3. In addition to the coordinate values, each vertex in the meshes is associated with RGB color information. However, merging the images acquired from different angles (or scanners) into one single face may become problematic. The face boundary cannot be identically defined or controlled in each angle when it is being captured. A registration procedure is developed to establish the spatial relationship among the different camera coordinate systems based on the correspondence of reference points. The registration procedure automatically combines the data captured from different angles to a large extent. However, precisely merging the data still requires manual pre- and postprocessing steps that eliminate the noise, overlaps, inverted vertices, and voids that inevitably occur in the merging process. The procedure shown in Fig. 4 is proposed to clean up the merged data in this regard.

The structured-light-based face scanner cannot precisely capture the shape of human hair, whose presence may introduce measurement noise. Each subject is therefore asked to wear a swimming cap to entirely cover his/her hair. The second step in the preprocessing is to mark the previously described feature points on the subject's face to delimit the face region of interest. A green sticker is manually placed on each of the feature points before performing the scan. Typical raw data obtained from the scanning are shown in Fig. 5(a); two scanners were used to take images in this case. A series of postprocessing steps is applied to produce a valid 3D face model from the multiple images. Those steps can be divided into two stages for creating a single face model: registration and integration. Two point sets captured from different view angles are usually merged by 3D registration, which calculates a spatial transformation that aligns the two point sets. Integration is the process of generating a single surface representation from multiple images. Let PR and PL be two landmark sets in three-dimensional space containing the same number of points. The landmarks must appear simultaneously in both data sets acquired from different angles. The problem is to find a transformation T applied to PR such that the difference between T(PR) and PL is minimized, i.e.,

(1) $\min_{T}\sum_{i}\left\|T(p_{R,i})-p_{L,i}\right\|^{2}$

Minimizing this function is equivalent to solving a least-squares problem. The landmarks can be chosen from feature points that are easily identified on a human face. In this work, the inner corners of both eyes and the subnasale are adopted as the three major landmarks for solving Eq. (1), as shown in Fig. 5(b).
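As a concrete illustration of Eq. (1), the sketch below solves the landmark alignment as a rigid least-squares problem using the standard SVD (Kabsch) solution. The NumPy implementation, function names, and landmark coordinates are illustrative assumptions, not the exact routine used in this work.

```python
import numpy as np

def rigid_align(PR, PL):
    """Rotation R and translation t minimizing sum_i ||T(p_R,i) - p_L,i||^2 (Eq. (1)),
    with T restricted to a rigid transformation (Kabsch/SVD solution)."""
    cR, cL = PR.mean(axis=0), PL.mean(axis=0)          # centroids of the two landmark sets
    H = (PR - cR).T @ (PL - cL)                        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cL - R @ cR
    return R, t

# Illustrative check with three landmarks (eye inner corners, subnasale); coordinates in mm.
theta = 0.1
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
PR = np.array([[30.1, 42.0, 95.2], [-29.8, 41.7, 95.0], [0.2, 18.5, 110.3]])
PL = PR @ Rz.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_align(PR, PL)
print(np.linalg.norm(PR @ R.T + t - PL))               # residual of Eq. (1), close to zero here
```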

Holes often exist in the raw data obtained by structured-light sensors due to reflection failure or noise. Special processing operations are thus required to repair a mesh model containing holes. The next step of the postprocessing procedure is to fill holes or voids. Many hole-filling methods have been proposed in the literature; they can be divided into two categories: voxel-based and triangle-based approaches. We adopt the hole-filling algorithm developed by Zhao et al. [18] in this work. The advancing front mesh technique is first used to generate a new triangular mesh covering the holes; the Poisson equation is then applied to optimize the new mesh.
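The advancing-front/Poisson formulation of Zhao et al. [18] is beyond the scope of a short example. The hedged sketch below only illustrates the common preliminary step of any triangle-based hole-filling routine, namely detecting boundary loops (edges used by a single triangle), together with a naive centroid-fan fill that serves purely as a placeholder for the actual algorithm.

```python
import numpy as np
from collections import defaultdict

def boundary_loops(faces):
    """Return hole boundaries as ordered vertex loops.
    Assumes a consistently oriented manifold triangle mesh."""
    count = defaultdict(int)
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    nxt = {}
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            if count[tuple(sorted((u, v)))] == 1:      # boundary edge: used by one triangle
                nxt[u] = v                             # keep the triangle's orientation
    loops, seen = [], set()
    for start in list(nxt):
        if start in seen:
            continue
        loop, v = [], start
        while v not in seen:
            seen.add(v)
            loop.append(v)
            v = nxt[v]
        loops.append(loop)
    return loops

def fan_fill(vertices, loop):
    """Naive placeholder fill: insert the loop centroid and fan triangles to it."""
    centroid = vertices[loop].mean(axis=0)
    new_idx = len(vertices)
    new_faces = [(loop[i], loop[(i + 1) % len(loop)], new_idx) for i in range(len(loop))]
    return np.vstack([vertices, centroid]), new_faces
```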

A nonoverlapping mesh is always desired in 3D modeling applications. Unfortunately, overlap is unavoidable in the data collection process, as a structured-light scanner always produces some overlap between the images taken from different views. The existence of overlaps complicates the construction of 3D face geometry and thus needs to be eliminated. The polygon zipper algorithm proposed by Turk and Levoy [19] is applied to create one mesh from the two data sets. This algorithm mainly consists of two steps: (1) constructing a mesh that reflects the topology of the final object and (2) adjusting the vertex positions of the mesh by averaging the geometry present in both images. The topology is created by merging pairs of meshes, each created from a single image, whose overlap is greater than a threshold value. This is done by trimming back the boundaries of each mesh that lie directly on top of the other mesh. Next, the overlapping meshes are merged by clipping the triangles of one mesh against the boundary of the other, and the vertices on the boundary are shared. Figure 6 shows the results of merging two image data sets.

A cosmetic mask works for skin beautification and covers only part of the human face, not the entire head. Constructing parametric head models is thus unnecessary for the mask design. Besides, the swimming cap worn by the subject increases the complexity of the head geometry by introducing tiny wrinkles, and simulating such fine variations with parametric models is highly difficult. For simplicity, only the front face is preserved for constructing parametric models. We generate the region of interest by trimming the head model using the feature points marked on the subject's face (see Fig. 2). A sphere sp(c, r) is first constructed from the right/left sides of the jaw and the right/left temples. Those feature points do not have an explicit definition in either the depth or the color image obtained by the scanner. Chosen at the experimenter's discretion, their positions may vary from one subject's face to another's. The region of interest retained for the cosmetic mask must cover the jaw. To ensure this condition, the last feature point, the menton m, must lie within the sphere constructed from the other four points, i.e.,

(2) $\left\|c-m\right\|\le r+\varepsilon$

where ε is a tolerance given by the user. If the above condition is not satisfied, the sphere is expanded to sp*(c, r*) with a new radius

(3) $r^{*}=\left\|c-m\right\|+\varepsilon$

The region of interest R, which will be parametrically represented, is created by intersecting the face model with sp*. The result is shown in Fig. 7.
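A sketch of this trimming step under the conditions of Eqs. (2) and (3): the sphere through the four jaw/temple landmarks is obtained from the circumsphere equations, its radius is expanded when the menton lies outside it, and only mesh vertices inside the expanded sphere are retained. The tolerance value and function names are illustrative.

```python
import numpy as np

def circumsphere(p):
    """Center and radius of the sphere through four non-coplanar points p (shape (4, 3)).
    Derived from |p_i - c|^2 = |p_0 - c|^2, i.e. 2(p_i - p_0).c = |p_i|^2 - |p_0|^2."""
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p[0] - c)

def region_of_interest(vertices, jaw_r, jaw_l, temple_r, temple_l, menton, eps=2.0):
    """Boolean mask of the vertices kept inside the (possibly expanded) sphere sp*(c, r*)."""
    c, r = circumsphere(np.array([jaw_r, jaw_l, temple_r, temple_l]))
    if np.linalg.norm(c - menton) > r + eps:          # Eq. (2) violated
        r = np.linalg.norm(c - menton) + eps          # Eq. (3): expanded radius r*
    return np.linalg.norm(vertices - c, axis=1) <= r  # vertices to retain
```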

The scanned data may contain small fluctuations between triangular meshes induced by random noise in the light source or by regions hidden by occlusions. A geometric operation, Laplacian smoothing [20], is therefore applied in the last step to improve the mesh quality by eliminating those small variations. Assume a vertex v in the current face model has k neighboring vertices vj connected to it. The Laplacian smoothing operation replaces the vertex with a new position calculated by averaging its neighboring vertices as

(4) $v^{*}=v+\lambda\cdot\Delta p$
(5) $\Delta p=\frac{1}{k}\sum_{j=1}^{k}\left(v_{j}-v\right)$

where λ is a given value controlling the smoothing result. Figure 8 compares the mesh models before and after applying the smoothing operation.
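A direct transcription of Eqs. (4) and (5) for a triangle mesh stored as vertex and face arrays; the smoothing factor and iteration count below are illustrative choices rather than values reported in this work.

```python
import numpy as np
from collections import defaultdict

def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
    """Apply Eqs. (4)-(5): move each vertex toward the mean of its neighbors."""
    neighbors = defaultdict(set)
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    V = vertices.astype(float)
    for _ in range(iterations):
        V_new = V.copy()
        for v, nbrs in neighbors.items():
            delta = V[list(nbrs)].mean(axis=0) - V[v]   # Eq. (5)
            V_new[v] = V[v] + lam * delta               # Eq. (4)
        V = V_new
    return V
```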

Constructing Three-Dimensional Parametric Face Models.

Most parametric modeling methods for free-form geometry work on meshes with the same topology and connectivity. This condition may not be guaranteed across the different face models generated in Sec. 3.1. Additional operations are required to process those models to ensure that they serve as valid training data for determining a parametric representation of 3D facial geometry. Our previous work [17] provides detailed descriptions of those operations, which are briefly discussed as follows. We first conduct random uniform resampling on a face model to adjust the number of its mesh points. The purpose of the resampling is to choose a subset of the vertices such that they are as evenly spaced as possible on the model. This operation ensures that all the models contain the same number of mesh points; it can also reduce the number of meshes in a model. Adaptive meshing is an effective approach that dynamically controls the precision of 3D geometry based on the requirements of an application, retaining precision in specific areas while leaving the other areas at lower levels of precision and resolution. It can replace uniform sampling in constructing parametric models, thereby reducing the number of meshes in the models and the computational time for generating approximate face geometry.
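The resampling algorithm itself is not spelled out here; one common way to pick a fixed-size, approximately evenly spaced vertex subset is greedy farthest-point sampling, sketched below as a plausible stand-in.

```python
import numpy as np

def farthest_point_sample(vertices, n_samples, seed=0):
    """Greedy farthest-point sampling: indices of approximately evenly spaced vertices."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(vertices)))]                 # arbitrary starting vertex
    dist = np.linalg.norm(vertices - vertices[idx[0]], axis=1)
    for _ in range(n_samples - 1):
        nxt = int(dist.argmax())                             # farthest from the current subset
        idx.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(vertices - vertices[nxt], axis=1))
    return np.array(idx)
```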

Cross-parameterization is a bijective mapping technique commonly used in geometry processing applications. Typical cross-parameterization methods select one model as the example mesh (source), whose mesh connectivity serves as a template for adjusting the other models (targets). The source and target models usually have similar shape features. The idea of cross-parameterization is to preserve the structure of the models by creating correspondences between these features. Connecting those features along the shortest paths divides a mesh model into a set of patches that are locally easy to parameterize. Mean value parameterization is an effective approach to mapping two corresponding patches parametrically [21]. The source model can be transformed to the target model's shape through this mapping relationship, patch by patch.

Parametric modeling techniques for free-form geometry usually employ a kernel regression model that establishes the correlation between the coordinates of the mesh points in a face model and a set of feature parameters. It is advantageous to keep the number of parameters small because of the low computational complexity in constructing the models. Those parameters should be easy to measure in practical use. They may be chosen by experts based on anatomical knowledge or according to the precision of the models constructed from different groupings. The main idea of parametric face modeling is to synthesize a 3D model from a number of feature parameters measured on a user's face to approximate his/her face geometry. A previous study [15] discussed and compared the effectiveness of different parameter sets in controlling the face geometry. According to its findings, 12 one-dimensional parameters, derived from nineteen feature points of a human face, are the most relevant ones. Figure 9 shows those nineteen feature points, and Table 1 lists the definition of each feature parameter.

A 3D model may need to contain thousands (or even tens of thousands) of mesh vertices to capture fine variations in face geometry. This results in high-dimensional parametric models that are difficult to construct mathematically. Any robust parametric model also needs to deal with the noise inevitably generated in the face scanning process. Principal component analysis is thus applied to reduce the model dimensionality and to suppress the influence of random noise existing in the training data while preserving sufficient data variance. This method has been successfully used to process anthropometric data in human body modeling.
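A minimal PCA sketch, assuming each cross-parameterized training face is flattened into one row of a data matrix; the number of retained components is an illustrative choice.

```python
import numpy as np

def pca_reduce(X, n_components=40):
    """X: (n_faces, 3 * n_vertices) matrix of flattened, corresponded face meshes.
    Returns the mean face, the principal components, the reduced coefficients,
    and the fraction of variance retained."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                   # (f, 3 * n_vertices)
    W = Xc @ components.T                            # reduced coefficients, (n_faces, f)
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return mean, components, W, explained

def reconstruct(mean, components, w):
    """Map reduced coefficients w back to a full (flattened) face mesh."""
    return mean + w @ components
```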

A linear regression model based on least-squares errors is first used to correlate the mesh coordinates with the feature parameter values L. The mesh data reduced by PCA are denoted as W, which can be expressed with the relationship matrix X and the estimation error ε with respect to the original face geometry

(6) $W^{T}_{f\times k}\cdot X_{k\times m}+\varepsilon=L^{T}_{f\times m}$
in which m is the number of feature parameters and f is the data dimension that has been reduced using PCA.
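In code, fitting the linear model amounts to an ordinary least-squares problem relating the measured feature parameters to the PCA coefficients. The sketch below fits the mapping in the direction needed at design time (parameters to coefficients) and adds an intercept term; both choices are assumptions made for illustration.

```python
import numpy as np

def fit_linear_face_model(L, W):
    """Least-squares fit of a linear map from feature parameters to PCA coefficients.

    L: (k, m) feature parameters of the k training faces (m = 12 here).
    W: (k, f) PCA-reduced mesh coefficients of the same faces.
    Returns B with shape (m + 1, f), including an intercept row."""
    A = np.hstack([L, np.ones((L.shape[0], 1))])     # augment with a bias term
    B, *_ = np.linalg.lstsq(A, W, rcond=None)
    return B

def synthesize(B, params):
    """Predict PCA coefficients for a new user's measured parameters (length m)."""
    return np.append(params, 1.0) @ B
```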

A linear kernel may produce satisfactory results for modeling simple geometries. It is interesting to compare its performance in approximating the complex human face with that of nonlinear approaches. We thus apply a nonlinear regression representation, Kriging, to construct a different parametric model. Assume M represents the 3D coordinates of a training face model that has been cross-parameterized and processed using PCA. At a point x, a weighted average of related points in the individual faces is written as

(7) $M_{e}(x)=\sum_{i}w_{i}\cdot M(x_{i})$

This estimated value may differ from the actual value $M_{a}(x)$ at the point, and the difference is referred to as the estimation error

(8) $\varepsilon(x)=M_{e}(x)-M_{a}(x)$

The spread of the estimated values about the true value is measured by the estimation variance

(9) $\sigma_{M}^{2}=\frac{1}{n}\sum_{i=1}^{n}\left[M_{e}(x_{i})-M_{a}(x_{i})\right]^{2}$

The idea behind Kriging is to choose the weights that produce an unbiased estimate with the minimum estimation variance. Unbiasedness requires the weights to satisfy

(10) $\sum_{i=1}^{n}\lambda_{i}=1$

The constrained minimization of the estimation variance is solved by introducing a Lagrange multiplier and setting

(11) $\partial L/\partial\lambda_{i}=0$

Thus, the Kriging formula is written as

(12) $\sum_{j=1}^{n}\lambda_{j}\,\gamma(x_{i}-x_{j})+\mu=\gamma(x_{0}-x_{i})$

Once the weights λi are solved for, the Kriging model can be finalized for statistical regression with given feature parameters.
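A compact ordinary-Kriging sketch consistent with Eqs. (7)-(12): a Gaussian variogram model, the unbiasedness constraint of Eq. (10) enforced through the Lagrange multiplier μ, and the linear system of Eq. (12) solved for the weights. The sill and range values are placeholders that would normally be fitted to the training data.

```python
import numpy as np

def gaussian_variogram(h, sill=1.0, rng=1.0):
    """Gaussian variogram model gamma(h); sill and range are placeholder values."""
    return sill * (1.0 - np.exp(-(h / rng) ** 2))

def ordinary_kriging_weights(xi, x0, sill=1.0, rng=1.0):
    """Solve Eq. (12) for the weights lambda_i and the Lagrange multiplier mu.

    xi: (n, d) known sample locations (e.g., feature-parameter vectors of training faces).
    x0: (d,) query location (the new user's parameter vector)."""
    n = len(xi)
    H = np.linalg.norm(xi[:, None, :] - xi[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gaussian_variogram(H, sill, rng)
    A[n, n] = 0.0                                   # Lagrange-multiplier row and column
    b = np.ones(n + 1)
    b[:n] = gaussian_variogram(np.linalg.norm(xi - x0, axis=1), sill, rng)
    w = np.linalg.solve(A, b)
    return w[:n], w[n]                              # weights (summing to 1, Eq. (10)) and mu

def kriging_estimate(values, weights):
    """Eq. (7): weighted average of the known values (e.g., PCA coefficients)."""
    return weights @ values
```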

A test example generated by the linear approach described above is shown in Fig. 10. The left and middle columns show the original face and the approximate result produced by the parametric models, respectively. Although the approximate result resembles the original face to some extent, visible differences remain in the appearance. For example, the fine variation around the mouth is not evident in the approximate model, and its cheek width appears greater than that of the original. In contrast, the right column shows the result computed by the parametric models using Kriging with a Gaussian distribution. The original face looks quite similar to the approximate result, with apparent improvement in the surface region around the mouth, which exhibits fine shape variations in this case. The parameter values listed in Table 2 support this statement. The Kriging-based model outperforms the linear one, with a smaller deviation from the original value in all 12 parameters used in constructing the parametric models. The effectiveness of the Kriging-based parametric models for the 3D human face is thus verified.

Several steps in the postprocessing of scanned facial data are conducted manually. Those steps are conducted only once, to produce the training data. After the parametric models have been constructed from the processed data, approximating face geometries using the models is automatic and quick; the parametric models thus constructed support real-time applications, as shown in this section. The parametric modeling method described in Sec. 3.2 provides useful references for conducting customized designs of facial products. The results generated by the Kriging-based models closely approximate the real face by capturing its detailed variations. In this section, we use the cosmetic facial mask as an example to illustrate how this idea works in practice. Cosmetic facial masks are a skin care product commonly used by young and female customers. Containing ingredients for different skincare functions, a facial mask inhibits the secretion of sebum and the evaporation of moisture, promotes blood circulation, and diffuses nutrients deep into the skin. A good fit to the wearer's face is important to ensure the full effectiveness of the mask functions. The higher the degree of fit, the more comfortable the mask is to wear. Designing a cosmetic mask according to the user's face geometry may also increase the utilization of the mask material and eliminate the waste induced by unnecessary oversizing. This requires quick generation of individual face geometry from directly input parameters.

A Prototype Cosmetic Mask Design System.

A prototype design tool is implemented for constructing facial masks, based on a design process similar to that of a previous study [22]. The mask design process consists of the following steps, shown in Fig. 11.

  (1) Generate a face model as the design reference using the nonlinear parametric models.
  (2) Construct the mask boundaries by specifying the corresponding curve control points directly on the face model (the blue curves in Fig. 11(a)).
  (3) Trim the mask shape out of the face model, as shown in Fig. 11(b).
  (4) Unfold (develop) the mask shape into a plane through a mesh smoothing process (see Fig. 11(c)).

Step 1 implements a geometric design function that constructs curves directly on 3D meshes. The user first scatters a set of points on the meshes and specifies the curve going through these points. The position of the points can be modified at any time. A modified butterfly subdivision method [23] is applied to determine the precise curve shape in the following manner. First, the user instructs the system to connect the given points, generating an initial curve. For fine adjustment of the curve, the user specifies the number of points to be added according to the side length of the meshes. These extra points can be adjusted through a computation process driven by a target function. In this design function, the goal is to increase the overall smoothness of the constructed surface region.

Step 2 refines the meshes on the surface passed through by the initial curve using constrained Delaunay triangulation [24]. In addition to the conditions imposed by normal Delaunay triangulation, the triangulation must satisfy the constrained edges; that is, the result needs to contain certain given edges. Step 3 creates the final mask shape by removing the meshes that are not within the mask boundaries. As shown in Fig. 11(b), the surface regions near the eyes, nose, mouth, and the outer boundary have been trimmed from the model generated in the previous step.

Mask Smoothing Algorithm.

Cosmetic masks are made from planar materials such as paper and cotton. Design for manufacturability is therefore a critical issue that needs to be considered in the mask design process. For this purpose, an unfolding (or development) procedure is conducted to transform the 3D mask shape into a 2D pattern. Fabrication of the mask starts with cutting the pattern shape out of a planar material. We apply the flattening method proposed by Wang [25], which treats the mask boundaries as tendon wires that can be bent, but not stretched or deformed, so that their length and shape are maintained during the mask smoothing procedure. This method should guarantee the developability of the mask shape. Assume the mesh surrounded by a user-defined boundary curve is Mp, with boundary vertices bi, i = 1, ..., n. There exist many solutions that preserve the length of the boundary curve; the solution closest to the shape defined by the boundary curve should be selected. Therefore, the following condition is imposed on the smoothing process:

(13) $\min\sum_{i=1}^{n}\left(\alpha_{i}-\beta_{i}\right)^{2}$

where αi is the angle at the vertex vi after smoothing and βi is the angle at the vertex vi in Mp. The total angle is calculated by adding all the angles incident to bi. The above equation is only valid for a closed curve; thus, the start and end points of the boundary curve must coincide, and the angle inscribed by the curve is 2π.

Some regions in the face model may not be developable and can create large errors during the smoothing procedure. To reduce the errors, the flattening method needs to be modified to ensure a given deviation between the original and the constructed models. The modification should tolerate a certain degree of non-developability. According to differential geometry, in a discrete developable mesh structure, the sum of the angles incident to a given nonboundary vertex is 2π. Therefore, the total angle at a nonboundary vertex of an approximately developable surface should be close to 2π. Assume Mp becomes MpF after undergoing the flattening procedure. The above condition can be written as [26]

(14) $\min J_{d}\quad\text{s.t.}\quad\sum_{j}\beta(v_{j})\le 2\pi$

where Jd is the target function that constrains the deformation from Mp to MpF via

(15) $J_{d}=\sum_{j}\left\|v_{j}-v_{j}^{c}\right\|^{2}$

where vjc is the vertex closest to vj. A linear interpolation between Mp and MpF can be used in place of MpF to control the degree of deformation during the smoothing process through a constant value u:

(16) $M_{p}(u)=(1-u)\,M_{p}+u\,M_{p}^{F},\quad u\in[0,1]$
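A small sketch of Eq. (16) together with the developability check discussed above: the trimmed mask Mp is blended with its flattened counterpart MpF by the factor u, and the deviation of the angle sum around each interior vertex from 2π indicates how far the blended mesh is from being developable. The data structures here are assumptions made for illustration.

```python
import numpy as np
from collections import defaultdict

def blend_mask(Mp, MpF, u):
    """Eq. (16): interpolate between the trimmed mask and its flattened version.
    Mp and MpF are (n, 3) vertex arrays with identical indexing; u is in [0, 1]."""
    return (1.0 - u) * Mp + u * MpF

def angle_defects(vertices, faces, boundary):
    """2*pi minus the sum of incident triangle angles at each interior vertex.
    Values near zero indicate a (locally) developable surface."""
    total = defaultdict(float)
    for tri in faces:
        for i in range(3):
            v, a, b = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
            e1, e2 = vertices[a] - vertices[v], vertices[b] - vertices[v]
            cosang = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            total[v] += np.arccos(np.clip(cosang, -1.0, 1.0))
    return {v: 2.0 * np.pi - s for v, s in total.items() if v not in boundary}
```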

The mask design process based on the prototype system is shown in Fig. 12. First, the user constructs curves by specifying the control points directly on a face model with the mouse cursor. Those curves work as boundaries defining the mask region. The system then retriangulates and trims the mesh model according to the boundaries. This is accomplished by removing the meshes in the inner holes, mainly around the eyes, nose, and mouth, and the region outside the boundaries. A smoothing procedure is applied to improve the shape continuity while preserving the length of the boundary curves and the angles at the boundary vertices. To reduce the shape distortion caused by smoothing, a surface flattening step is conducted to minimize the displacement of mesh vertices during the process. This operation adjusts the mask model to increase its surface developability before smoothing. The degree of fit between the mask and the wearing face is controlled by the mean square errors in the alignment process. Figure 13 shows the smoothing process through the mask flattening interface. The degree of flattening can be modified by adjusting the u value in Eq. (16), as shown in the figure. The test results show that the higher the degree of flattening, the more developable the mask model becomes, thus producing a lower degree of fit for the mask design.

Mask Design Results.

Figure 14 presents different cosmetic mask designs generated by the prototype system implementing the methods described in Secs. 3 and 4. The 2D patterns flattened by the smoothing process are also shown in the figure. The iterative closest point (ICP) method [27] is used to estimate how closely a mask design matches the face wearing it. We compare the mask shapes created by two design methods based on the estimation results. ICP is an algorithm commonly used to optimally align two sets of points and has been successfully applied to applications such as geographic information systems, pattern recognition, computer vision, and robot planning. The two point sets to be aligned do not need to have the same number of points. ICP iteratively adjusts the position of the first point set by minimizing its positional deviations with respect to the second point set.

Stated differently, ICP determines the best match between a target model and a source model through a 3D transformation in space. In this study, the target is the mask design constructed on the face model approximated by the nonlinear parametric models, while the source is the facial geometry of the user wearing the mask. The deviations between the target and the source are estimated as the mean square errors between the corresponding meshes. The smaller the deviations are, the higher the degree of fit. Except for point sets that are completely identical, the deviations calculated by ICP do not vanish. Table 3 summarizes the ICP results for the designs generated with the u value set to 1, 0.5, and 0, respectively. Recall that u is the linear interpolation coefficient between Mp and MpF and controls the developability of the 3D shape into a 2D pattern. The four faces shown in Fig. 14 are used as test examples, denoted as subjects 1–4 in the table. The middle column represents the fit of the masks designed with the parametric models. The right column shows the degree of fit of the masks determined from the so-called average face. The average face is calculated by taking the average of the coordinates of each vertex across all training models; the calculation is possible because they all have the same number of meshes and the same connectivity. The deviations from the face model thus determined are similar to those of design results driven by human head/face sizing. All the masks constructed from the parametric models produce smaller errors. We thus conclude that those customized masks offer a better fit than the ones based on the average face. This conclusion also verifies the effectiveness of the customized mask design proposed in this work.
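A minimal ICP loop in the spirit of Besl and McKay [27], pairing points with a k-d tree and refitting a rigid transformation at each iteration; the returned mean squared error plays the role of the degree-of-fit measure reported in Table 3. The implementation details are illustrative rather than the exact procedure used in this study.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    """Rotation and translation minimizing ||R P + t - Q||^2 (Kabsch solution)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp_fit(source, target, iterations=50, tol=1e-6):
    """Iteratively align source points to target points; return the aligned points
    and the mean squared closest-point error used as the degree-of-fit measure."""
    src = source.astype(float)
    tree = cKDTree(target)
    prev, mse = np.inf, np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)              # closest-point correspondences
        R, t = best_rigid(src, target[idx])      # best rigid fit to the current matches
        src = src @ R.T + t
        mse = float((dist ** 2).mean())
        if prev - mse < tol:                     # stop when the error no longer improves
            break
        prev = mse
    return src, mse
```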

Cosmetic and skin care products are high-volume consumer goods commonly used by young and female consumers. In addition to the nutritional ingredients, the fit of a cosmetic mask to the user's face affects the comfort of wearing the mask. Oversized or undersized mask designs may lower consumers' satisfaction. It is advantageous to determine the mask shape directly on individual face models. However, capturing a 3D human face model usually requires special scanning devices that are not always available on the consumer's side. This paper presented a parametric modeling method for 3D human faces derived from anthropometric data of Taiwanese females. Parametric models constructed with the nonlinear Kriging method approximate individual face geometry from a set of easily measurable facial parameters. Test results show that the nonlinear models outperform the linear ones by capturing fine shape variations around highly curved regions of a human face. The facial geometry thus produced serves as an effective design reference for constructing customized facial masks. A prototype design system was implemented to demonstrate the practical value of the proposed parametric modeling method and how to conduct customized mask design. The system allows the user to specify the mask shape directly on the 3D meshes of a face model. Boundary curves can be constructed by specifying curve control points on the meshes. A smoothing procedure was applied to improve the mesh quality of the surface regions controlled by the boundary curves. The system also provides automatic trimming and flattening functions that unfold the mask design into a 2D pattern. In addition, an ICP-based method was used to evaluate the degree of fit between a mask design and the face wearing it. The evaluation results indicate that the mask designs constructed on the parametric face models offer a high degree of fit. Future work can conduct ergonomic evaluations to physically validate the feasibility of the proposed models. This study enhances the practical value of large-scale anthropometric data by realizing human-centric mass customization, using the cosmetic facial mask as an example. It may open up novel applications of parametric face modeling in the fields of esthetic medicine and cosmetic surgery.

  • Ministry of Science and Technology, Taiwan (Grant No. MOST 102-2221-E-007-079-MY2).


References

Smith, S., Smith, G., Jiao, J., and Chu, C. H., 2013, "Mass Customization in the Product Life Cycle," J. Intell. Manuf., 24(5), pp. 877–885.
Lo, C. H., Chu, C. H., and Huang, S. H., 2015, "Evaluating the Effect of Interactions Between Appearance-Related Product Designs and Facial Characteristics on Social Affectivity," Int. J. Ind. Ergon., 45, pp. 35–47.
Wang, I. J., and Chu, C. H., 2017, "3D Parametric Human Face Modeling for Mass Customized Design of Cosmetic Masks," ASME Paper No. DETC2017-68119.
Sanders, M., and McCormick, E., 1993, Human Factors in Engineering and Design, 7th ed., McGraw-Hill, New York.
Nayak, R., and Padhye, R., 2016, "The Use of Laser in Garment Manufacturing: An Overview," Fashion Text., 5(1), pp. 1–16.
Loker, S., Ashdown, S., and Schoenfelder, K., 2005, "Size-Specific Analysis of Body Scan Data to Improve Apparel Fit," J. Text. Apparel Technol. Manage., 4(3), pp. 1–15.
Liu, B. S., 2008, "Incorporating Anthropometry Into Design of Ear-Related Products," Appl. Ergon., 39(1), pp. 115–121.
Istook, C. L., and Hwang, S. J., 2001, "3D Body Scanning Systems With Application to the Apparel Industry," J. Fashion Mark. Manage., 5(2), pp. 120–132.
Bibb, R., Eggbeer, D., and Williams, R., 2006, "Rapid Manufacture of Removable Partial Denture Frameworks," Rapid Prototyping J., 12(2), pp. 95–99.
Tognola, G., Parazzini, M., Svelto, C., Galli, M., Ravazzani, P., and Grandori, F., 2004, "Design of Hearing Aid Shells by Three Dimensional Laser Scanning and Mesh Reconstruction," J. Biomed. Opt., 9(4), pp. 835–843.
Lacko, D., Huysmans, T., Vleugels, J., De Bruyne, G., Van Hulle, M. M., Sijbers, J., and Verwulgen, S., 2017, "Product Sizing With 3D Anthropometry and K-Medoids Clustering," Comput.-Aided Des., 91, pp. 60–74.
Luximon, Y., Ball, R., and Justice, L., 2012, "The 3D Chinese Head and Face Modeling," Comput.-Aided Des., 44(1), pp. 40–47.
Shah, P. B., and Luximon, Y., 2017, "Review on 3D Scanners for Head and Face Modeling," International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, Vancouver, BC, Canada, July 9–14, pp. 47–56.
Ellena, T., Skals, S., Subic, A., Mustafa, H., and Pang, T. Y., 2017, "3D Digital Headform Models of Australian Cyclists," Appl. Ergon., 59(Pt. A), pp. 11–18.
Huang, S. H., Yang, C. K., Tseng, C. Y., and Chu, C. H., 2015, "Design Customization of Respiratory Mask Based on 3D Face Anthropometric Data," Int. J. Precis. Eng. Manuf., 16(3), pp. 487–494.
Tseng, C. Y., Wang, I. J., and Chu, C. H., 2014, "Parametric Modeling of 3D Human Faces Using Anthropometric Data," IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bandar Sunway, Malaysia, Dec. 9–12, pp. 491–495.
Chu, C. H., Wang, I. J., Wang, J. B., and Luh, Y. P., 2017, "3D Parametric Human Face Modeling for Personalized Product Design: Eyeglasses Frame Design Case," Adv. Eng. Inf., 32, pp. 202–223.
Zhao, W., Gao, S., and Lin, H., 2007, "A Robust Hole-Filling Algorithm for Triangular Mesh," Visual Comput., 23(12), pp. 987–997.
Turk, G., and Levoy, M., 1994, "Zippered Polygon Meshes From Range Images," 21st ACM Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, July 24–29, pp. 311–318. https://graphics.stanford.edu/papers/zipper/zipper.pdf
Attene, M., and Falcidieno, B., 2006, "ReMESH: An Interactive Environment to Edit and Repair Triangle Meshes," IEEE International Conference on Shape Modeling and Applications (SMI'06), Matsushima, Japan, June 14–16, pp. 41–41.
Floater, M. S., 2003, "Mean Value Coordinates," Comput. Aided Geom. Des., 20(1), pp. 19–27.
Wang, C. C., Zhang, Y., and Sheung, H., 2010, "From Designing Products to Fabricating Them From Planar Materials," IEEE Comput. Graph. Appl., 30(6), pp. 74–85.
Schröder, P., Zorin, D., DeRose, T., Forsey, D. R., Kobbelt, L., Lounsbery, M., and Peters, J., 2000, "Subdivision for Modeling and Animation," SIGGRAPH 2000 Course Notes.
Chew, L. P., 1989, "Constrained Delaunay Triangulations," Algorithmica, 4(1–4), pp. 97–108.
Wang, C. C., 2008, "WireWarping: A Fast Surface Flattening Approach With Length-Preserved Feature Curves," Comput.-Aided Des., 40(3), pp. 381–395.
Kwok, T. H., Zhang, Y., and Wang, C. C., 2012, "Efficient Optimization of Common Base Domains for Cross Parameterization," IEEE Trans. Visualization Comput. Graph., 18(10), pp. 1678–1692.
Besl, P. J., and McKay, N. D., 1992, "Method for Registration of 3-D Shapes," Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, Nov. 12–15, pp. 586–607.

Figures

Fig. 1 Capturing 3D face data using two noncontact scanners

Fig. 2 Marking landmarks in the image seen from a scanner

Fig. 3 Generation of textured 3D data from the face scanner

Fig. 4 Preprocessing and postprocessing steps of 3D face models

Fig. 5 (a) Typical raw data captured by two scanners from different angles and (b) three landmarks for merging the data captured from different angles

Fig. 6 Face models created by merging two scanned images

Fig. 7 Generating the region of interest with a sphere constructed from four feature points

Fig. 8 The smoothing results of two face models (left: before; right: after)

Fig. 9 Human face features related to the parametric face models

Fig. 10 Approximate face geometry generated by linear and nonlinear parametric models

Fig. 11 Construction steps of cosmetic facial mask from a face model

Fig. 12 Constructing the mask shape by specifying inner and outer boundary curves in the prototype system

Fig. 13 Mask smoothing through the flattening interface

Fig. 14 Mask design results for different human faces

Tables

Table 1 Parameters used in the parametric face models

Table 2 Comparison of the 12 parameter values between the original face and the results produced by two different parametric models (L01–L08: mm; R09–R12: radian)

Table 3 The degrees of fit for different designs
