Research Papers

Application of Feature-Learning Methods Toward Product Usage Context Identification and Comfort Prediction

Author and Article Information
Dipanjan Ghosh

Department of Mechanical and
Aerospace Engineering,
805 Furnas Hall,
University at Buffalo—SUNY,
Buffalo, NY 14260
e-mail: dipanjan@buffalo.edu

Andrew Olewnik

Department of Mechanical and
Aerospace Engineering,
412 Bonner Hall,
University at Buffalo—SUNY,
Buffalo, NY 14260
e-mail: olewnik@buffalo.edu

Kemper Lewis

Fellow ASME
Department of Mechanical and
Aerospace Engineering,
208 Bell Hall,
University at Buffalo—SUNY,
Buffalo, NY 14260
e-mail: kelewis@buffalo.edu

Corresponding author.

Contributed by the Computers and Information Division of ASME for publication in the JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING. Manuscript received October 28, 2016; final manuscript received July 11, 2017; published online November 28, 2017. Assoc. Editor: Monica Bordegoni.

J. Comput. Inf. Sci. Eng 18(1), 011004 (Nov 28, 2017) (10 pages) Paper No: JCISE-16-2118; doi: 10.1115/1.4037435 History: Received October 28, 2016; Revised July 11, 2017

Usage context is considered a critical driving factor in customers' product choices. In addition, physical use of a product (i.e., user–product interaction) shapes a number of customer perceptions (e.g., level of comfort). In the emerging internet of things (IoT), this work hypothesizes that it is possible to understand product usage and level of comfort while the product is in use by capturing user–product interaction data. Mining these data to understand both the usage context and the comfort of the user adds new capabilities to product design. There has been tremendous progress in the field of data analytics, but its application in product design is still nascent. In this work, the application of feature-learning methods to the identification of product usage context and level of comfort is demonstrated, where usage context is limited to the activity of the user. A novel generic architecture built on foundations in convolutional neural networks (CNNs) is developed and applied to walking-activity classification using smartphone accelerometer data. Results are compared with feature-based machine learning algorithms (neural networks and support vector machines (SVMs)) and demonstrate the benefits of feature-learning methods over feature-based machine-learning algorithms. To demonstrate the generic nature of the architecture, an application to comfort-level prediction is presented using force-sensor data from a sensor-integrated shoe.
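The feature-learning approach summarized above replaces hand-crafted features with convolutional filters applied directly to raw sensor windows. As a rough illustration only, the NumPy sketch below runs a single 1D convolution + ReLU + max-pooling stage over a simulated three-axis accelerometer window; the window length (128 samples), filter count (8), kernel width (9), and pooling size (4) are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

def conv1d_valid(x, kernels):
    """Valid-mode 1D convolution across channels.
    x: (channels, T); kernels: (n_filters, channels, k) -> (n_filters, T-k+1)."""
    n_f, _, k = kernels.shape
    T = x.shape[1]
    out = np.zeros((n_f, T - k + 1))
    for f in range(n_f):
        for t in range(T - k + 1):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + k])
    return out

def max_pool(x, p):
    """Non-overlapping max-pooling along the time axis."""
    n_f, T = x.shape
    T2 = T // p
    return x[:, :T2 * p].reshape(n_f, T2, p).max(axis=2)

rng = np.random.default_rng(0)
window = rng.standard_normal((3, 128))        # 3-axis accelerometer, 128 samples
kernels = rng.standard_normal((8, 3, 9)) * 0.1  # 8 filters; random here, learned in practice
feat = np.maximum(conv1d_valid(window, kernels), 0.0)  # ReLU activation
pooled = max_pool(feat, 4)                    # (8, 30) pooled feature maps
features = pooled.ravel()                     # flattened features fed to a classifier
print(features.shape)                         # (240,)
```

In a real implementation the kernels would be learned by backpropagation in a deep-learning framework; here they are random, so the sketch shows only the shape transformations that turn a raw sensor window into a learned-feature vector.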

Copyright © 2018 by ASME


Dickson, P. R. , 1982, “ Person-Situation: Segmentation's Missing Link,” J. Mark., 46(4), pp. 56–64. [CrossRef]
Belk, R. W. , 1974, “ An Exploratory Assessment of Situational Effects in Buyer Behavior,” J. Mark. Res., 11(2), pp. 156–163. [CrossRef]
De la Fuente, J. R. , and Guillen, M. J. Y. , 2005, “ Identifying the Influence of Product Design and Usage Situation on Consumer Choice,” Int. J. Mark. Res., 47(6), pp. 667–686. https://www.mrs.org.uk/ijmr_article/article/80992
Van Horn, D. , and Lewis, K. , 2015, “ The Use of Analytics in the Design of Sociotechnical Products,” Artif. Intell. Eng. Des. Anal. Manuf., 29(1), pp. 65–81. [CrossRef]
He, L. , Chen, W. , Hoyle, C. , and Yannou, B. , 2012, “ Choice Modeling for Usage Context-Based Design,” ASME J. Mech. Des., 134(3), p. 031007. [CrossRef]
Louviere, J. J. , Hensher, D. , Swait, J. , and Adamowicz, W. , 2000, Stated Choice Methods, Cambridge University Press, Cambridge, UK. [CrossRef]
Klayman, J. , and Ha, Y. , 1987, “ Confirmation, Disconfirmation, and Information in Hypothesis Testing,” Psychol. Rev., 94(2), pp. 211–228. [CrossRef]
Burns, A. , and Evans, S. , 2001, “ Empathic Design: A New Approach for Understanding and Delighting Customers,” Int. J. New Prod. Dev. Innovation Manage., 3(4), pp. 313–327.
Lin, J. , and Seepersad, C. C. , 2007, “ Empathic Lead Users: The Effects of Extraordinary User Experiences on Customer Needs Analysis and Product Redesign,” ASME Paper No. DETC2007-35302.
Ghosh, D. , Kim, J. , Olewnik, A. , Lakshmanan, A. , and Lewis, K. , 2016, “ Cyber-Empathic Design—A Data Driven Framework for Product Design,” ASME Paper No. DETC2016-59642.
Ravi, N. , Dandekar, N. , Mysore, P. , and Littman, M. L. , 2005, “ Activity Recognition From Accelerometer Data,” 17th Conference on Innovative Applications of Artificial Intelligence (IAAI), Pittsburgh, PA, July 9–13, pp. 1541–1546. https://pdfs.semanticscholar.org/20cb/9de9921d7efbc1add2848239d7916bf158b2.pdf
Anguita, D. , Ghio, A. , Oneto, L. , Parra, X. , and Reyes-Ortiz, J. L. , 2012, “ Human Activity Recognition on Smartphones Using a Multiclass Hardware-Friendly Support Vector Machine,” Ambient Assisted Living and Home Care, J. Bravo , R. Hervás , and M. Rodríguez , eds., Springer, Berlin, pp. 216–223. [CrossRef]
Mannini, A. , and Sabatini, A. M. , 2010, “ Machine Learning Methods for Classifying Human Physical Activity From On-Body Accelerometers,” Sensors, 10(2), pp. 1154–1175. [CrossRef] [PubMed]
Kwapisz, J. R. , Weiss, G. M. , and Moore, S. A. , 2011, “ Activity Recognition Using Cell Phone Accelerometers,” ACM SigKDD Explor. Newsl., 12(2), pp. 74–82. [CrossRef]
Bao, L. , and Intille, S. S. , 2004, “ Activity Recognition From User-Annotated Acceleration Data,” Pervasive Computing, Springer-Verlag, Berlin, pp. 1–17. [CrossRef]
Tapia, E. M. , 2008, “ Using Machine Learning for Real-Time Activity Recognition and Estimation of Energy Expenditure,” Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA. https://dspace.mit.edu/handle/1721.1/44913
LeCun, Y. , and Bengio, Y. , 1995, “ Convolutional Networks for Images, Speech, and Time Series,” The Handbook of Brain Theory and Neural Networks, M. A. Arbib, ed., MIT Press, Cambridge, MA.
Krizhevsky, A. , Sutskever, I. , and Hinton, G. E. , 2012, “ ImageNet Classification With Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, Curran Associates, Red Hook, NY, pp. 1097–1105. [CrossRef]
Längkvist, M. , Karlsson, L. , and Loutfi, A. , 2014, “ A Review of Unsupervised Feature Learning and Deep Learning for Time-Series Modeling,” Pattern Recognit. Lett., 42, pp. 11–24. [CrossRef]
Kotsiantis, S. B. , Zaharakis, I. D. , and Pintelas, P. E. , 2006, “ Machine Learning: A Review of Classification and Combining Techniques,” Artif. Intell. Rev., 26(3), pp. 159–190. [CrossRef]
Bengio, Y. , Courville, A. , and Vincent, P. , 2013, “ Representation Learning: A Review and New Perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., 35(8), pp. 1798–1828. [CrossRef] [PubMed]
Auer, P. , Burgsteiner, H. , and Maass, W. , 2008, “ A Learning Rule for Very Simple Universal Approximators Consisting of a Single Layer of Perceptrons,” Neural Networks, 21(5), pp. 786–795. [CrossRef] [PubMed]
Huang, G.-B. , Wang, D. H. , and Lan, Y. , 2011, “ Extreme Learning Machines: A Survey,” Int. J. Mach. Learn. Cybern., 2(2), pp. 107–122. [CrossRef]
Cristianini, N. , and Shawe-Taylor, J. , 2000, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK.
Maurer, U. , Smailagic, A. , Siewiorek, D. P. , and Deisher, M. , 2006, “ Activity Recognition and Monitoring Using Multiple Sensors on Different Body Positions,” International Workshop on Wearable and Implantable Body Sensor Networks (BSN), Cambridge, MA, Apr. 3–5, pp. 113–116.
Cireşan, D. C. , Meier, U. , Gambardella, L. M. , and Schmidhuber, J. , 2010, “ Deep, Big, Simple Neural Nets for Handwritten Digit Recognition,” Neural Comput., 22(12), pp. 3207–3220.
Nielsen, M. A. , 2015, “ Neural Networks and Deep Learning,” Determination Press, accessed Sept. 25, 2015, http://neuralnetworksanddeeplearning.com/
Haykin, S. , 1998, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, Upper Saddle River, NJ.
Schmidhuber, J. , 2015, “ Deep Learning in Neural Networks: An Overview,” Neural Networks, 61, pp. 85–117. [CrossRef] [PubMed]
Lawrence, S. , Giles, C. L. , Tsoi, A. C. , and Back, A. D. , 1997, “ Face Recognition: A Convolutional Neural-Network Approach,” IEEE Trans. Neural Networks, 8(1), pp. 98–113. [CrossRef]
Levine, S. , Finn, C. , Darrell, T. , and Abbeel, P. , 2016, “ End-To-End Training of Deep Visuomotor Policies,” J. Mach. Learn. Res., 17(39), pp. 1–40.
He, K. , Zhang, X. , Ren, S. , and Sun, J. , 2015, “ Deep Residual Learning for Image Recognition,” Preprint arXiv:1512.03385. https://arxiv.org/abs/1512.03385
Bottou, L. , 2012, “ Stochastic Gradient Tricks,” Neural Networks: Tricks of the Trade, Montavon, G. , Orr, G. B. , and Müller, K. R. , eds., Springer, Berlin, pp. 430–435.
LeCun, Y. , Bottou, L. , Orr, G. B. , and Müller, K. R. , 1998, “ Efficient BackProp,” Neural Networks: Tricks of the Trade, G. B. Orr and K. R. Müller , eds., Springer, Berlin, pp. 9–50.
Bengio, Y. , 2012, “ Practical Recommendations for Gradient-Based Training of Deep Architectures,” Neural Networks: Tricks of the Trade, Montavon, G. , G. B. Orr , and K. R. Müller , eds., Springer, Berlin, pp. 437–478. [CrossRef]
Sutskever, I. , Martens, J. , Dahl, G. E. , and Hinton, G. E. , 2013, “ On the Importance of Initialization and Momentum in Deep Learning,” Int. Conf. Mach. Learn., 28(3), pp. 1139–1147. http://proceedings.mlr.press/v28/sutskever13.html
Hadgu, A. T. , Nigam, A. , and Diaz-Aviles, E. , 2015, “ Large-Scale Learning With ADAGRAD on Spark,” IEEE International Conference on Big Data (Big Data), Santa Clara, CA, Oct. 29–Nov. 1, pp. 2828–2830.
Kingma, D. , and Ba, J. , 2014, “ ADAM: A Method for Stochastic Optimization,” Preprint arXiv:1412.6980. https://arxiv.org/abs/1412.6980
Anguita, D. , Ghio, A. , Oneto, L. , Parra, X. , and Reyes-Ortiz, J. L. , 2013, “ A Public Domain Dataset for Human Activity Recognition Using Smartphones,” 21st European Symposium on Artificial Neural Networks, Bruges, Belgium, Apr. 24–26, pp. 437–442.
Bergstra, J. S. , Bardenet, R. , Bengio, Y. , and Kégl, B. , 2011, “ Algorithms for Hyper-Parameter Optimization,” Advances in Neural Information Processing Systems, Granada, Spain, Dec. 12–14, pp. 2546–2554.
Refaeilzadeh, P. , Tang, L. , and Liu, H. , 2009, “ Cross-Validation,” Encyclopedia of Database Systems, Springer, New York, pp. 532–538. [CrossRef]
Luštrek, M. , and Kaluža, B. , 2009, “ Fall Detection and Activity Recognition With Machine Learning,” Informatica, 33(2), pp. 197–204. http://www.informatica.si/index.php/informatica/article/view/238/235
Bottou, L. , 2010, “ Large Scale Machine Learning With Stochastic Gradient Descent,” International Conference on Computational Statistics (COMPSTAT), Paris, France, Aug. 22–27, pp. 177–186. https://www.rocq.inria.fr/axis/COMPSTAT2010/slides/slides_17.pdf
Scholkopf, B. , Sung, K.-K. , Burges, C. J. , Girosi, F. , Niyogi, P. , Poggio, T. , and Vapnik, V. , 1997, “ Comparing Support Vector Machines With Gaussian Kernels to Radial Basis Function Classifiers,” IEEE Trans. Signal Process., 45(11), pp. 2758–2765. [CrossRef]
Montgomery, D. C. , 2012, Design and Analysis of Experiments, 8th ed., Wiley, Hoboken, NJ.
Anderson, J. C. , and Gerbing, D. W. , 1988, “ Structural Equation Modeling in Practice: A Review and Recommended Two-Step Approach,” Psychol. Bull., 103(3), pp. 411–423. [CrossRef]


Figures

Fig. 1: Relationship of usage context with product preference
Fig. 2: Traditional and proposed machine learning methodology
Fig. 3: Machine learning algorithms learning style classification
Fig. 4: SVM illustrations [24]
Fig. 5: Simple neuron model
Fig. 6: Simple neuron model (with input vector)
Fig. 7: Multilayer neural network
Fig. 8: CNN illustrations (for images)
Fig. 9: Proposed CNN architecture for multisource signals
Fig. 10: Model training and selection procedure
Fig. 11: Test dataset accuracy comparison—activity recognition
Fig. 12: Confusion matrices: (a) NN, (b) MC-SVM, and (c) CNN
Fig. 13: Foot area, sensor-integrated shoe prototype and sensor layout
Fig. 14: Test dataset accuracy comparison—comfort rating estimation
Fig. 15: Conceptual integration of usage context with CED framework


