Robotic Vision
Summary:
Named in honor of its architect, Professor DJ Lee, the city of "Leehi" was constructed for one purpose: teaching students the technology (and vocabulary) behind self-driving cars. Leehi includes roads, a fire station, a bakery, miniature people, and, in true local fashion, a Costco. It has streets just wide enough for self-driving cars, along with intersections, stop signs, construction zones, and traffic lights. But this small town also has a pseudo-GPS coordinate system, and the cars that cruise through its streets are not simply RC cars.
Publications:
2. A.W. Sumsion, S.A. Torrie, J.V. Broekhuijsen, and D.J. Lee, "Neural Network Self Driving Car: A Platform for Learning and Research on a Reduced Scale", Intermountain Engineering, Technology, and Computing Conference (i-ETC), Orem, UT, USA, May 12, 2023.
1. J.D. Newman, Z. Sun, and D.J. Lee, "Self-Driving Cars: A Platform for Learning and Research", Intermountain Engineering, Technology, and Computing Conference (i-ETC), Provo, UT, USA, October 2-3, 2020. Student Engineering Paper Award (Second Place).
Summary:
Computer vision applications often involve computationally intensive tasks such as target tracking, object identification, image rectification, localization, pose estimation, and optical flow. The initial steps of all these applications are the detection, description, and matching of high-quality feature points. The SYnthetic BAsis (SYBA) descriptor was developed for feature point description and matching. It is a compact and efficient binary descriptor that performs a series of similarity tests between a feature image region and a set of synthetic basis images, using the similarity test results as the feature descriptor. It has been successfully applied to visual odometry, UAV ground target tracking, motion estimation and analysis, and soccer game analysis.
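The core idea is easy to sketch. The Python snippet below is a minimal illustration of the SYBA concept, not the published implementation: the feature region is binarized, and each descriptor element counts the overlap between the binarized region and one random binary "synthetic basis" image. The 30x30 region size, the number of basis images, and the mean-intensity threshold are illustrative assumptions.

```python
import numpy as np

def syba_style_descriptor(patch, bases, threshold=None):
    """Sketch of a SYBA-style binary descriptor.

    patch : 2-D grayscale feature region (e.g., 30x30)
    bases : list of binary synthetic basis images, same shape as patch
    Returns one overlap count per basis image.
    """
    # Binarize the feature region about its mean intensity
    # (illustrative stand-in for the published thresholding step).
    if threshold is None:
        threshold = patch.mean()
    binary_patch = (patch > threshold).astype(np.uint8)

    # Each descriptor element is the number of pixels where the
    # binarized region and a synthetic basis image are both 1.
    return np.array([np.sum(binary_patch & b) for b in bases])

rng = np.random.default_rng(0)
bases = [rng.integers(0, 2, size=(30, 30), dtype=np.uint8) for _ in range(16)]
patch = rng.random((30, 30))
descriptor = syba_style_descriptor(patch, bases)
```

Matching then reduces to comparing these count vectors, e.g., with an L1 distance, with ambiguous matches filtered by a uniqueness test as in publication 1 below.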
Journal Publications:
4. D. Zhang, A. Desai, and D.J. Lee, "Using Synthetic Basis Feature Descriptor for Motion Estimation", International Journal of Advanced Robotic Systems, vol. 15/5, 13 pages, October 2018.
3. A. Desai and D.J. Lee, "Efficient Feature Descriptor for Unmanned Aerial Vehicle Ground Moving Object Tracking", AIAA Journal of Aerospace Information Systems, vol. 14/6, p. 345-349, June 2017.
2. A. Desai and D.J. Lee, "Visual Odometry Drift Reduction Using SYBA Descriptor and Feature Transformation", IEEE Transactions on Intelligent Transportation Systems, vol. 17/7, p. 1839-1851, July 2016.
1. A. Desai, D.J. Lee, and D. Ventura, "An Efficient Feature Descriptor Based on Synthetic Basis Functions and Uniqueness Matching Strategy", Computer Vision and Image Understanding, vol. 142, p. 37-49, January 2016.
Summary:
Many image and signal processing techniques have been applied to medical and health care applications in recent years. A novel smartphone vision-based indoor localization and guidance system, called Seeing Eye Phone, was developed to help the visually impaired navigate unfamiliar environments such as public buildings. A similar technique was applied to human pose and hand gesture recognition for human-computer interfaces. A mobile app was developed for library book inventory and shelf reading by matching book spine images.
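To make the shelf-reading idea concrete, the sketch below matches a photographed book spine against a reference spine image. It uses OpenCV's ORB features as a generic stand-in; the published system (publication 2 below) adds color information to feature detection and description, which this sketch omits, and the file paths and parameter values are hypothetical.

```python
import cv2

def count_spine_matches(query_path, reference_path, ratio=0.75):
    """Sketch: score how well a photographed book spine matches a
    reference spine image using ORB features and a ratio test."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(query, None)
    kp2, des2 = orb.detectAndCompute(ref, None)
    if des1 is None or des2 is None:
        return 0  # no features found in one of the images

    # Hamming distance suits binary descriptors; keep only matches
    # that pass Lowe's ratio test against the second-best candidate.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)
```

Ranking the shelf's catalog images by this match count identifies the spine, and out-of-order spines can then be flagged against the expected call-number sequence.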
Journal Publications:
4. D. Zhang, D.J. Lee, and B. Taylor, "Seeing Eye Phone: A Smart Phone-based Indoor Guidance System for the Visually Impaired", Machine Vision and Applications Journal, vol. 25/3, p. 811-822, April 2014.
3. D. Zhang, D.J. Lee, and Y.P. Chang, "A New Profile Shape Matching Stereo Vision Algorithm for Real-time Human Pose and Hand Gesture Recognition", International Journal of Advanced Robotic Systems, vol. 11, February 2014.
2. S.G. Fowers and D.J. Lee, "An Effective Color Addition to Feature Detection and Description for Book Spine Image Matching", International Scholarly Research Network–Machine Vision, vol. 2012, Article ID 945973, 15 pages, January 2012.
1. D.J. Lee, J.D. Anderson, and J.K. Archibald, "Hardware Implementation of Spline-based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-time Visual Guidance to the Visually Impaired", special issue on "Signal Processing for Applications in Healthcare Systems (AHS)" of the EURASIP Journal on Advances in Signal Processing, vol. 2008, 10 pages, June 2008.
Summary:
The onboard vision system is composed of a field-programmable gate array (FPGA) board and a custom interface daughterboard, which allow it to provide data on drifting movements of the micro unmanned aerial vehicle that inertial measurement units cannot detect. The algorithms implemented for the vision system include a Harris feature detector, a template-matching feature correlator, similarity-constrained homography by random sample consensus (RANSAC), color segmentation, radial distortion correction, and an extended Kalman filter with a standard-deviation outlier rejection technique. This vision system was designed specifically as an onboard vision solution for determining the movement of micro unmanned aerial vehicles with severe size, weight, and power limitations.
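The front end of such a pipeline can be sketched as follows: corners detected in the previous frame, correspondences tracked into the current frame, and a RANSAC-fitted similarity transform (rotation, scale, translation) between the two. This is a desktop OpenCV approximation, not the FPGA implementation: pyramidal Lucas-Kanade stands in for the template-matching feature correlator, and all parameter values are illustrative.

```python
import cv2

def estimate_frame_motion(prev_gray, curr_gray):
    """Sketch of a drift-estimation front end for a hovering micro-UAV."""
    # Harris corner detection in the previous frame
    # (useHarrisDetector=True selects the Harris response).
    corners = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=200, qualityLevel=0.01,
        minDistance=8, useHarrisDetector=True, k=0.04)

    # Track the corners into the current frame; pyramidal
    # Lucas-Kanade substitutes here for the template-matching
    # feature correlator described above.
    moved, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, corners, None)
    ok = status.ravel() == 1
    src, dst = corners[ok], moved[ok]

    # RANSAC fit of a 4-DOF similarity transform; the inlier mask
    # rejects outlier correspondences, mirroring the
    # similarity-constrained homography step.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M, inliers
```

In the full system, the recovered translation and rotation would feed the extended Kalman filter alongside the inertial measurements, with the outlier rejection applied to the filter innovations.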
Publications:
4. A.W. Sumsion and D.J. Lee, "The Hummingbird Drone: Using Visual Coordination to Intersect Flying Objects", 25th International Conference on Image Processing, Computer Vision, & Pattern Recognition, Las Vegas, NV, USA, July 26-29, 2021.
3. S.G. Fowers, D.J. Lee, D. Ventura, and B.J. Tippetts, "Novel Feature Descriptor for Low-Resource Embedded Vision Sensors for Micro-UAV Applications", AIAA Journal of Aerospace Information Systems, vol. 10/8, p. 385-395, August 2013.
2. B.J. Tippetts, D.J. Lee, and J.K. Archibald, "An On-Board Vision Sensor System for Small Unmanned Vehicle Applications", Machine Vision and Applications Journal, vol. 23/3, p. 405-413, May 2012.
1. B.J. Tippetts, D.J. Lee, S.G. Fowers, and J.K. Archibald, "Real-Time Vision Sensor for an Autonomous Hovering Micro Unmanned Aerial Vehicle", AIAA Journal of Aerospace Computing, Information, and Communication, vol. 6, p. 570-584, October 2009.
Summary:
We present a novel multi-frame structure-from-motion algorithm in which camera motion and object structure are calculated from optical flow probability distributions instead of a single optical flow estimate at each feature point. The optical flow distributions of the selected feature points allow us to quantify the accuracy of the flow estimate in any direction. Exploiting this additional knowledge yields a structure-from-motion algorithm that is more accurate than one built on point estimates alone.
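The benefit of the distributions is easiest to see in the Gaussian special case, where each feature's flow distribution reduces to a 2x2 covariance. The sketch below is illustrative with hypothetical names, and the published method works with full probability distributions rather than covariances alone; it solves for the motion parameters by generalized least squares, so flow components the distribution marks as uncertain are down-weighted.

```python
import numpy as np

def solve_motion(jacobians, flows, covariances):
    """Weighted least-squares motion estimate from uncertain optical flow.

    jacobians   : per-feature 2xK matrices mapping the K motion
                  parameters to predicted image flow at that feature
    flows       : per-feature 2-vectors of measured optical flow
    covariances : per-feature 2x2 flow covariance matrices
    """
    k = jacobians[0].shape[1]
    A = np.zeros((k, k))
    b = np.zeros(k)
    for J, u, S in zip(jacobians, flows, covariances):
        W = np.linalg.inv(S)   # confidence: tight distributions weigh more
        A += J.T @ W @ J       # accumulate normal equations
        b += J.T @ W @ u
    return np.linalg.solve(A, b)  # K motion parameters
```

A feature whose flow is well constrained along one direction but ambiguous along another (e.g., on an edge) then contributes only the reliable component, instead of being trusted or discarded wholesale.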
Journal Publications:
2. D.J. Lee, P.C. Merrell, Z.Y. Wei, and B.E. Nelson, "Two-Frame Structure from Motion Using Optical Flow Probability Distributions for Unmanned Air Vehicle Obstacle Avoidance", Machine Vision and Applications Journal, vol. 21/3, p. 229-240, April 2010.
1. D.J. Lee, P.C. Merrell, B.E. Nelson, and Z.Y. Wei, "Multi-Frame Structure from Motion using Optical Flow Probability Distributions", Neurocomputing, vol. 72/4-6, p. 1032-1041, January 2009.