Facial Motion
Biometric identity verification is becoming common in daily life across a variety of authentication applications. Human biometric identifiers such as fingerprints, retina scans, and 2D or 3D facial features rely on the intrinsic characteristics of the subject, which makes them more convenient, and harder to lose or forget, than conventional approaches such as PIN or password authentication. However, these popular techniques still cannot prevent passive (non-consensual) authentication or provide liveness assurance. To address these challenges, we propose a unique approach that uses facial motion to aid identity verification. Our idea is to analyze both the user's unique facial features and a user-selected secret facial motion to verify the user. The method requires only a short video of the frontal face performing the customized facial motion.
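The two-factor decision can be sketched as follows; the embedding vectors, cosine-similarity scores, thresholds, and the `verify` helper are illustrative assumptions, not the published model:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(face_emb, motion_emb, enrolled_face, enrolled_motion,
           face_thresh=0.8, motion_thresh=0.7):
    """Accept only if BOTH factors match: the user's facial identity
    and the user-selected secret facial motion (hypothetical thresholds)."""
    id_score = cosine(face_emb, enrolled_face)
    motion_score = cosine(motion_emb, enrolled_motion)
    return id_score >= face_thresh and motion_score >= motion_thresh
```

In the actual system the embeddings would be produced by networks trained on the short frontal-face video; here they are abstract vectors to show the two-factor logic.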
8. Z. Sun, S.A. Torrie, A.W. Sumsion, and D.J. Lee, "Self-Supervised Facial Motion Representation Learning via Contrastive Subclips," Electronics, vol. 12(6), Article 1369, 10 pages, March 2023.
7. A.W. Sumsion, S.A. Torrie, Z. Sun, and D.J. Lee, "Comparing the Transfer of Identity Across a Racial Transformation," SPIE Electronic Imaging, Intelligent Robotics and Industrial Applications using Computer Vision, San Francisco, CA, USA, 6 pages, January 15-17, 2023.
6. S.A. Torrie, A.W. Sumsion, Z. Sun, and D.J. Lee, "Automated Dataset Collection Pipeline for Lip Motion Authentication," SPIE Electronic Imaging, Intelligent Robotics and Industrial Applications using Computer Vision, San Francisco, CA, USA, 6 pages, January 15-17, 2023.
5. Z. Sun, A.W. Sumsion, S.A. Torrie, and D.J. Lee, "Learning Facial Motion Representation with a Lightweight Encoder for Identity Verification," Electronics, vol. 11(13), Article 1946, 14 pages, June 2022.
4. S.A. Torrie, A.W. Sumsion, Z. Sun, and D.J. Lee, "Facial Password Data Augmentation," Intermountain Engineering, Technology, and Computing Conference (i-ETC), Orem, UT, USA, May 13-14, 2022.
3. Z. Sun, A.W. Sumsion, S.A. Torrie, and D.J. Lee, "Learn Dynamic Facial Motion Representations Using Transformer Encoder," Intermountain Engineering, Technology, and Computing Conference (i-ETC), Orem, UT, USA, May 13-14, 2022.
2. Z. Sun, D.J. Lee, D. Zhang, W.C. Chen, and X. Li, "Preliminary Study on Using Facial Motions for Identity Verification," International Conference on Image Processing, Computer Vision, & Pattern Recognition, Las Vegas, NV, USA, July 26-29, 2021.
1. Z. Sun, D.J. Lee, D. Zhang, and X. Li, "Concurrent Two-factor Identity Verification Using Facial Identity and Facial Actions," SPIE Electronic Imaging, Intelligent Robotics and Industrial Applications using Computer Vision, Virtual, No. 318, 6 pages, January 11, 2021.
The field of computer vision has come a long way in solving the problem of image classification. Not long ago, handcrafted convolutional kernels were a staple of nearly every computer vision algorithm. With the advent of Convolutional Neural Networks (CNNs), however, handcrafted features have become the exception rather than the rule, and for good reason: CNNs have taken the field to new heights by solving problems that were previously unapproachable. The Robotic Vision Lab focuses on using CNNs to solve real-world problems beyond object recognition and image classification. Another research focus is the optimization and hardware implementation of CNNs for real-time embedded applications.
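As a reminder of what "handcrafted kernels" means in practice, here is a minimal valid-mode 2D convolution with a hand-designed Sobel kernel, written in plain NumPy; a CNN learns kernels like this from data instead of having them designed by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution (really cross-correlation,
    as used in CNN layers)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Handcrafted horizontal-gradient (Sobel) kernel.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# A 5x5 image with a vertical edge: response is strong near the edge,
# zero in the flat regions.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = conv2d(img, sobel_x)
```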
15. X.Z. Wang, D. Zhang, H.Z. Tan, and D.J. Lee, "A Self-Fusion Network Based on Contrastive Learning for Group Emotion Recognition," IEEE Transactions on Computational Social Systems, vol. 10(2), pp. 458-469, April 2023.
14. J.Y. Zhang, X.Z. Wang, D. Zhang, and D.J. Lee, "Semi-supervised Group Emotion Recognition Based on Contrastive Learning," Electronics, vol. 11(23), Article 3990, 16 pages, December 2022.
13. Z.S. Zhu, D. Zhang, C.L. Chi, M. Li, and D.J. Lee, "A Complementary Dual-branch Network for Appearance-based Gaze Estimation from Low-resolution Facial Images," IEEE Transactions on Cognitive and Developmental Systems, (online September 28, 2022).
12. C.L. Chi, D. Zhang, Z.S. Zhu, X.Z. Wang, and D.J. Lee, "Human Pose Estimation for Low-resolution Image Using 1-D Heatmaps and Offset Regression," Multimedia Tools and Applications, vol. 82(4), pp. 6289-6307, August 2022.
11. J.C. Hsu, F.H. Wu, H.H. Lin, D.J. Lee, Y.F. Chen, and C.S. Lin, "Models for Predicting Readmission of Pneumonia Patients after Discharge," Electronics, vol. 11(5), Article 673, 22 pages, February 2022.
10. X. Li, D. Zhang, M. Li, and D.J. Lee, "Using Image Rectification and Lightweight Convolutional Neural Network for Accurate Head Pose Estimation," IEEE Transactions on Multimedia, (online January 26, 2022).
9. W. Zou, D. Zhang, and D.J. Lee, "A New Multi-feature Fusion based Convolution Neural Network for Facial Expression Recognition," Applied Intelligence, vol. 52(3), pp. 2918-2929, February 2022.
8. J.N. Teng, D. Zhang, W. Zou, M. Li, and D.J. Lee, "Typical Facial Expression Network Using Facial Feature Decoupler and Spatial-Temporal Learning," IEEE Transactions on Affective Computing, (online August 7, 2021).
7. T.S. Simons and D.J. Lee, "Efficient Binarized Convolutional Layers for Visual Inspection Applications on Resource Limited FPGAs and ASICs," Electronics, vol. 10(13), Article 1511, 16 pages, June 2021.
6. W.C. Chen, D. Zhang, M. Li, and D.J. Lee, "STCAM: Spatial-Temporal and Channel Attention Module for Dynamic Facial Expression Recognition," IEEE Transactions on Affective Computing, vol. 14(1), pp. 800-810, January 2023.
5. Y.Y. Li, D. Zhang, and D.J. Lee, "Automatic Fabric Defect Detection with a Wide-And-Compact Network," Neurocomputing, vol. 329, pp. 329-338, February 2019.
4. Y.Y. Li, D. Zhang, and D.J. Lee, "IIRNet: A Lightweight Deep Neural Network Using Intensely Inverted Residuals for Image Recognition," Image and Vision Computing, vol. 92, Article 103819, December 2019.
3. T.S. Simons and D.J. Lee, "A Review of Binarized Neural Networks," Electronics, vol. 8(6), Article 661, 25 pages, June 2019.
2. T.S. Simons and D.J. Lee, "Jet Features: Hardware-Friendly, Learned Convolutional Kernels for High-Speed Image Classification," Electronics, vol. 8(5), Article 588, 20 pages, May 2019.
1. J.N. Teng, D. Zhang, and D.J. Lee, "Recognition of Chinese Food Using Convolutional Neural Network," Multimedia Tools and Applications, vol. 78(9), pp. 11155-11172, May 2019.
Annotation and analysis of sports videos is a challenging task that, once accomplished, could provide various benefits to coaches, players, and spectators. In particular, American football could benefit from such a system for statistics and game strategy analysis, since manual analysis of recorded game videos is tedious and inefficient. As a first step of our research for this unique application, we focus on locating and labeling individual football players from a single overhead image of a play immediately before it begins. A pre-trained deep learning network detects and locates the players in the image, and a ResNet labels each individual player with the corresponding player position in the formation. Our player detection and labeling algorithms achieve greater than 90% accuracy, especially for the skill positions on offense (Quarterback, Running Back, and Wide Receiver) and defense (Cornerback and Safety). Results from our preliminary studies on player detection, localization, and labeling prove the feasibility of building a complete American football strategy analysis system using artificial intelligence.
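The two-stage pipeline described above (detect players, then label each crop) can be sketched as follows; `fake_detect` and `fake_classify` are hypothetical stand-ins for the pre-trained detector and the ResNet position classifier:

```python
import numpy as np

def label_players(image, detect, classify):
    """Two-stage sketch: detect bounding boxes for players, then label
    each cropped player with a position (QB, RB, WR, CB, S, ...)."""
    labels = []
    for (x1, y1, x2, y2) in detect(image):
        crop = image[y1:y2, x1:x2]
        labels.append(classify(crop))
    return labels

# Hypothetical stand-ins for the real models.
def fake_detect(image):
    return [(0, 0, 4, 4), (4, 4, 8, 8)]

def fake_classify(crop):
    return "QB" if crop.mean() > 0.5 else "WR"

img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0
labels = label_players(img, fake_detect, fake_classify)  # ['QB', 'WR']
```

Keeping the detector and classifier as interchangeable functions mirrors the design in the summary: the detector only localizes, and all position knowledge lives in the second-stage classifier.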
2. J.D. Newman, A.W. Sumsion, S.A. Torrie, and D.J. Lee, "Play Analysis of American Football Formations Using Deep Learning," Electronics, vol. 12(3), Article 726, 30 pages, February 2023. (SCIE)
1. J.D. Newman, J.W. Lin, D.J. Lee, and J.J. Liu, "Automatic Annotation of American Football Video Footages for Game Strategy Analysis," SPIE Electronic Imaging, Intelligent Robotics and Industrial Applications using Computer Vision, Virtual, No. 308, 6 pages, January 11, 2021.
A golf swing requires full-body coordination and much practice to perform the complex motion precisely and consistently. The force from the golfer's full-body movement on the club and the trajectory of the swing are the main determinants of swing quality. In this research, we introduce a unique motion analysis method to evaluate the quality of a golf swing. The primary goal is to evaluate how closely the user's swing matches a reference swing. We use 17 skeleton points to evaluate the resemblance and report annotations that indicate incorrect body movements. This evaluation can be used as real-time feedback to improve player performance: by using the feedback system repeatedly, the player can train their muscle memory and improve swing consistency. We created our dataset with a professional golf instructor, and it includes both good and bad swings. Our results demonstrate that such a machine learning-based approach is feasible and has great potential to be adopted as a low-cost but efficient tool for improving swing quality and consistency. This technology can also be applied to performance evaluation in other sports.
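A minimal version of the per-joint comparison against the reference swing might look like this; the `tol` threshold and the (frames, 17, 2) keypoint layout are assumptions for illustration, not the published method:

```python
import numpy as np

def swing_feedback(user, ref, tol=0.1):
    """user, ref: (frames, 17, 2) arrays of 2D skeleton keypoints,
    time-aligned to the reference swing. Returns the indices of joints
    whose mean deviation from the reference exceeds tol, i.e. the
    incorrect body movements to highlight as feedback."""
    dev = np.linalg.norm(user - ref, axis=2).mean(axis=0)  # per-joint, shape (17,)
    return [j for j in range(user.shape[1]) if dev[j] > tol]
```

For example, if a swing matches the reference everywhere except the lead elbow, only that joint's index is returned, which is exactly the kind of targeted annotation described above.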
1. J.J. Liu, J.D. Newman, and D.J. Lee, "Using Artificial Intelligence to Provide Visual Feedback for Golf Swing Training," SPIE Electronic Imaging, Intelligent Robotics and Industrial Applications using Computer Vision, Virtual, No. 321, 6 pages, January 11, 2021.
2. J.J. Liu, J.D. Newman, and D.J. Lee, "Body Motion Analysis for Golf Swing Evaluation," International Symposium on Visual Computing (ISVC), LNCS 12509, pp. 566-577, Virtual, October 5-7, 2020.
Object recognition is a well-studied but extremely challenging field. In 2010, we developed a novel approach to feature construction for object detection called Evolution-COnstructed Features (ECO features). ECO features are constructed automatically by employing a standard genetic algorithm to discover multiple series of image transforms that are highly discriminative. We have successfully applied this algorithm to many visual inspection applications, including automated apple stem and calyx detection, shrimp shape quality grading, fish species recognition, invasive carp removal, fruit and food quality grading, and road condition and pavement quality evaluation. A demo video and the BYU Fish Dataset are available.
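The idea of evolving series of transforms can be sketched with a toy genetic algorithm; the transform names, GA parameters, and the fitness function are placeholders (in the real system, fitness is the discriminative power of the constructed feature on training images):

```python
import random

# Toy stand-ins for the image operators an ECO feature can chain together.
TRANSFORMS = ["sobel", "blur", "threshold", "erode", "dilate", "gabor"]

def mutate(seq):
    """Replace one transform in the series with a random one."""
    seq = list(seq)
    seq[random.randrange(len(seq))] = random.choice(TRANSFORMS)
    return seq

def crossover(a, b):
    """Splice two transform series at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(fitness, pop_size=20, seq_len=4, generations=30):
    """Standard GA over transform series: keep the fitter half, refill
    the population with mutated crossovers of elite parents."""
    pop = [[random.choice(TRANSFORMS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

# Toy fitness: prefer series containing edge operators.
best = evolve(lambda s: s.count("sobel"))
```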
10. J. Chai, D.J. Lee, B.J. Tippetts, and K.D. Lillywhite, "Implementation of an Award-Winning Invasive Fish Recognition and Separation System," Electronics, vol. 10(17), Article 2182, 13 pages, September 2021.
9. Z.H. Guo, M. Zhang, D.J. Lee, and T.S. Simons, "Smart Camera for Quality Inspection and Grading of Food Products," Electronics, vol. 9(3), Article 505, 18 pages, March 2020.
8. Z.H. Guo, M. Zhang, and D.J. Lee, "Efficient Evolutionary Learning Algorithm for Real-Time Embedded Vision Applications," Electronics, vol. 8(11), Article 1367, 18 pages, November 2019.
7. M. Zhang, D.J. Lee, K.D. Lillywhite, and B.J. Tippetts, "Automatic Quality and Moisture Evaluations Using Evolution Constructed Features," Computers and Electronics in Agriculture, vol. 135, pp. 321-327, April 2017.
6. D. Zhang, D.J. Lee, M. Zhang, B.J. Tippetts, and K.D. Lillywhite, "Object Recognition Algorithm for the Automatic Identification and Removal of Invasive Fish," Biosystems Engineering, vol. 145, pp. 65-75, May 2016.
5. D. Zhang, K.D. Lillywhite, D.J. Lee, and B.J. Tippetts, "Automated Fish Taxonomy using Evolution-COnstructed Features for Invasive Species Removal," Pattern Analysis and Applications, vol. 18(2), pp. 451-459, May 2015.
4. D. Zhang, K.D. Lillywhite, D.J. Lee, and B.J. Tippetts, "Automatic Shrimp Shape Grading Using Evolution Constructed Features," Computers and Electronics in Agriculture, vol. 100, pp. 116-122, January 2014.
3. K.D. Lillywhite, D.J. Lee, B.J. Tippetts, and J.K. Archibald, "A Feature Construction Method for General Object Recognition," Pattern Recognition, vol. 46(12), pp. 3300-3314, December 2013.
2. D. Zhang, K.D. Lillywhite, D.J. Lee, and B.J. Tippetts, "Automated Apple Stem End and Calyx Detection using Evolution-COnstructed Features," Journal of Food Engineering, vol. 119(3), pp. 411-418, December 2013.
1. K.D. Lillywhite, B.J. Tippetts, and D.J. Lee, "Self-Tuned Evolution-COnstructed Features for General Object Recognition," Pattern Recognition, vol. 45(1), pp. 241-251, January 2012.
Vertebra shape can effectively describe various pathologies found in spine X-ray images, and certain critical regions on the shape contour help determine whether a shape is pathologic or normal. We developed a technique that automatically selects nine points from the boundary contour, represents the vertebra shape using multiple open triangles, and uses relevance feedback to retrieve vertebra X-ray images.
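The open-triangle representation can be illustrated by computing, at each selected boundary point, the angle it forms with its two neighbours; this is a simplified sketch, and the choice of neighbours as triangle vertices is an assumption (the published method works from nine specific contour points):

```python
import math

def open_triangle_angles(points):
    """Angle (in degrees) at each interior contour point, formed by the
    'open triangle' with its two neighbouring points. Unusually sharp
    angles can flag critical contour regions."""
    angles = []
    for i in range(1, len(points) - 1):
        ax, ay = points[i-1][0] - points[i][0], points[i-1][1] - points[i][1]
        bx, by = points[i+1][0] - points[i][0], points[i+1][1] - points[i][1]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        # Clamp to [-1, 1] to guard against floating-point drift.
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cosang)))))
    return angles
```

A straight stretch of contour yields an angle near 180 degrees, while a corner or protrusion yields a much smaller angle, which is why such angles are useful shape descriptors.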
4. D.J. Lee, S.K. Antani, Y.C. Chang, K. Gledhill, L.R. Long, and P. Christensen, "CBIR of Spine X-ray Images on Inter-vertebral Disc Space and Shape Profiles," special issue on "Knowledge Discovery in Medicine," Data & Knowledge Engineering, vol. 68(12), pp. 1359-1369, December 2009.
3. X.Q. Xu, D.J. Lee, S.K. Antani, L.R. Long, and J.K. Archibald, "Using Relevance Feedback with Short-term Memory for Content-based Spine X-ray Image Retrieval," Neurocomputing, vol. 72(10-12), pp. 2259-2269, June 2009.
2. X.Q. Xu, D.J. Lee, S.K. Antani, and L.R. Long, "A Spine X-ray Image Retrieval System Using Partial Shape Matching," IEEE Transactions on Information Technology in Biomedicine, vol. 12(1), pp. 100-108, January 2008.
1. S.K. Antani, D.J. Lee, L.R. Long, and G.R. Thoma, "Evaluation of Shape Similarity Measurement Methods for Spine X-Ray Images," special issue on "Multimedia Database Management Systems" of the Journal of Visual Communication and Image Representation, vol. 15(3), pp. 285-302, September 2004.