High Performance Design and Implementation of Hardware Friendly Vision Algorithms
Convolutional Neural Networks
SYBA
Stereo Vision
BASIS
Optical Flow

Convolutional Neural Networks

Summary:
Binarized Neural Networks (BNNs) are deep neural networks that use binary values for activations and weights instead of full-precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. We also explored a set of learned convolutional kernels that we call Jet Features. Jet Features are efficient to compute in software, easy to implement in hardware, and perform well on visual inspection tasks.
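The bitwise trick behind BNN inference can be sketched in a few lines (an illustration only; the function names are ours, not from the papers): with weights and activations restricted to {-1, +1}, a dot product reduces to XOR and popcount on packed bit words.

```python
def pack_bits(values):
    """Pack a list of +/-1 values into an integer bit mask (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors stored as bit masks.
    Matching bits contribute +1 and differing bits -1, so
    dot = n - 2 * popcount(a XOR b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_bits(a), pack_bits(b), len(a)))  # -> 0, same as sum(x*y)
```

In hardware the XOR and popcount map directly onto LUTs and adder trees, which is why the binary formulation is so much cheaper than full-precision multiply-accumulate.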
Journal Publications:
2. T.S. Simons and D.J. Lee, "A Review of Binarized Neural Networks," Electronics, vol. 8, no. 6, article 661, 25 pages, June 2019.
1. T.S. Simons and D.J. Lee, "Jet Features: Hardware-Friendly, Learned Convolutional Kernels for High-Speed Image Classification," Electronics, vol. 8, no. 5, article 588, 20 pages, May 2019. (SCIE)

SYBA

Summary:
Feature detection, description, and matching are crucial steps for many computer vision algorithms. These steps rely on feature descriptors to match image features across sets of images. Our SYnthetic BAsis (SYBA) feature descriptor offers superior performance to other binary descriptors and was designed specifically for real-time embedded applications. Its hardware implementation on a field-programmable gate array (FPGA) is a high-throughput, low-latency solution, which is critical for applications such as high-speed object detection and tracking, stereo vision, visual odometry, structure from motion, and optical flow. We developed a new version of SYBA, called SR-SYBA, to improve its rotation and scaling invariance. FPGA Demo Video.
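Binary descriptors such as SYBA are matched by comparing bit patterns. A minimal sketch of Hamming-distance matching (a generic illustration of binary-descriptor matching, not the SYBA similarity measure itself):

```python
def hamming(d1, d2):
    """Number of differing bits between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def best_match(query, candidates):
    """Index of the candidate descriptor with the smallest Hamming distance."""
    return min(range(len(candidates)), key=lambda i: hamming(query, candidates[i]))

descriptors = [0b10110010, 0b01101101, 0b10110011]
print(best_match(0b10110110, descriptors))  # -> 0 (differs by a single bit)
```

Because the comparison is a single XOR followed by a popcount, it pipelines well on an FPGA, which is one reason binary descriptors suit high-throughput, low-latency matching.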
Journal Publications:
3. M. Yu, D. Zhang, D.J. Lee, and A. Desai, "SR-SYBA: A Scale and Rotation Invariant Synthetic Basis Feature Descriptor with Low Memory Usage," Electronics, vol. 9, no. 5, article 810, 20 pages, May 2020.
2. D.J. Lee, S.G. Fuller, and A.S. McCown, "Optimization and Implementation of Synthetic Basis Feature Descriptor on FPGA," Electronics, vol. 9, no. 3, article 391, 21 pages, March 2020.
1. D. Zhang, L.A. Raven, D.J. Lee, M. Yu, and A. Desai, "Hardware Friendly Robust Synthetic Basis Feature Descriptor," Electronics, vol. 8, no. 8, article 847, 19 pages, July 2019.

Stereo Vision

Summary:
An efficient stereo vision hardware design implemented on an FPGA can minimize payload and power consumption on micro unmanned vehicles (micro-UVs) while providing them with 3D information and still leaving computational resources available for other processing tasks. We developed a hardware design of the efficient Profile Shape Matching stereo vision algorithm. We also performed a review of stereo vision algorithms, grouping them into three categories: 1) those with published results of real-time or near-real-time performance on standard processors, 2) those with real-time performance on specialized hardware (e.g., GPU, FPGA, DSP, ASIC), and 3) those that have not been shown to achieve near-real-time performance. This review is intended to aid those seeking algorithms suitable for real-time implementation on resource-limited systems and to encourage further research and development in this area by providing a snapshot of the status quo.
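Profile Shape Matching itself is more involved, but the core stereo task it accelerates can be sketched as a toy window-matching search along one scanline (our illustration; the window size and disparity range are assumed parameters):

```python
def sad(a, b):
    """Sum of absolute differences between two pixel windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left_row, right_row, x, window=3, max_disp=4):
    """For pixel x on the left scanline, return the shift d that best aligns
    the left window with a window d pixels to the left on the right scanline."""
    ref = left_row[x : x + window]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cost = sad(ref, right_row[x - d : x - d + window])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# The bright feature near x=4 in the left image appears near x=2 in the
# right image, so the recovered disparity is 2.
left  = [0, 0, 0, 10, 80, 10, 0, 0, 0, 0]
right = [0, 10, 80, 10, 0, 0, 0, 0, 0, 0]
print(disparity(left, right, 3))  # -> 2
```

On an FPGA, the candidate windows for all disparities can be scored in parallel each clock cycle, which is what makes such scanline searches attractive for resource-limited real-time systems.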
Journal Publications:
5. B.J. Tippetts, D.J. Lee, K.D. Lillywhite, and J.K. Archibald, "Review of Stereo Vision Algorithms and their Suitability for Resource Limited Systems," Journal of Real-Time Image Processing, vol. 11, no. 1, pp. 5-25, January 2016.
4. B.J. Tippetts, D.J. Lee, K.D. Lillywhite, and J.K. Archibald, "Efficient Stereo Vision Algorithms for Resource Limited Systems," Journal of Real-Time Image Processing, vol. 10, no. 1, pp. 163-174, March 2015.
3. B.J. Tippetts, D.J. Lee, K.D. Lillywhite, and J.K. Archibald, "Hardware-efficient Design of Real-time Profile Shape Matching Stereo Vision Algorithm on FPGA," International Journal of Reconfigurable Computing, vol. 2014, Article ID 945926, 12 pages, February 2014.
2. B.J. Tippetts, D.J. Lee, J.K. Archibald, and K.D. Lillywhite, "Dense Disparity Real-time Stereo Vision Algorithm for Resource Limited Systems," IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 10, pp. 1547-1555, October 2011.
1. D.J. Lee, J.D. Anderson, and J.K. Archibald, "Hardware Implementation of Spline-based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-time Visual Guidance to the Visually Impaired," special issue on "Signal Processing for Applications in Healthcare Systems (AHS)" of the EURASIP Journal on Advances in Signal Processing, vol. 2008, 10 pages, June 2008.

BASIS

Summary:
The basis sparse-coding inspired similarity (BASIS) descriptor was designed for resource-limited applications such as unmanned aerial vehicle (UAV) embedded systems, small microprocessors, and small low-power field-programmable gate array (FPGA) fabric. It uses sparse coding to create dictionary images that model regions of the human visual cortex. Because computing BASIS descriptors requires little computation, produces compact descriptors, and needs no floating-point arithmetic, the approach is an excellent candidate for FPGA hardware implementation. An improved version, called Tree-BASIS, was developed for UAV imagery.
Journal Publications:
3. S.G. Fowers, A. Desai, D.J. Lee, D. Ventura, and J.K. Archibald, "Tree-Based Feature Descriptor and Its Hardware Implementation," International Journal of Reconfigurable Computing, vol. 2014, Article ID 606210, 12 pages, November 2014.
2. S.G. Fowers, A. Desai, D.J. Lee, D. Ventura, and D.K. Wilde, "Efficient Tree-Based Feature Descriptor and Matching Algorithm," AIAA Journal of Aerospace Information Systems, vol. 11, no. 9, pp. 596-606, September 2014.
1. S.G. Fowers, D.J. Lee, D. Ventura, and J.K. Archibald, "The Nature Inspired BASIS Feature Descriptor for UAV Imagery and Its Hardware Implementation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 5, pp. 756-768, May 2013.


Optical Flow

Summary:
Accurate optical flow estimation is a crucial task for many computer vision applications. However, because of its computational and processing-speed requirements, it is rarely used for real-time obstacle detection, especially in small unmanned vehicle and embedded applications. We developed a ridge regression-based optical flow algorithm to address the collinearity problem that arises in traditional least-squares approaches to calculating optical flow. Additionally, taking advantage of hardware parallelism, spatial and temporal smoothing operations are applied to the image sequence derivatives to improve accuracy. We also developed an efficient motion field analysis algorithm that uses the optical flow values and a simplified motion model, and implemented it in hardware for real-time obstacle detection in unmanned ground vehicle applications.
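The ridge-regression idea can be sketched for a single Lucas-Kanade-style window (our illustration; the regularization weight lam is an assumed tuning parameter, not a value from the papers). The 2x2 normal equations become singular when all image gradients in the window are collinear; adding the ridge term keeps them invertible.

```python
def ridge_flow(ix, iy, it, lam=1e-3):
    """Solve (A^T A + lam*I) v = -A^T It for the flow v = (u, v),
    where the columns of A are the spatial derivatives Ix and Iy."""
    # Accumulate the regularized 2x2 normal matrix and the right-hand side.
    sxx = sum(x * x for x in ix) + lam
    syy = sum(y * y for y in iy) + lam
    sxy = sum(x * y for x, y in zip(ix, iy))
    bx = -sum(x * t for x, t in zip(ix, it))
    by = -sum(y * t for y, t in zip(iy, it))
    det = sxx * syy - sxy * sxy  # strictly positive once lam > 0
    u = (syy * bx - sxy * by) / det
    v = (sxx * by - sxy * bx) / det
    return u, v

# Collinear gradients (Iy is zero everywhere) would make plain least squares
# singular; the ridge term still yields a finite estimate, u close to 2, v = 0.
print(ridge_flow([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [-2.0, -2.0, -2.0]))
```

The closed-form 2x2 solve maps cleanly to fixed-function hardware, which is why the ridge formulation suits the FPGA pipeline described above.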
Journal Publications:
4. Z.Y. Wei, D.J. Lee, B.E. Nelson, and J.K. Archibald, "Hardware-Friendly Vision Algorithms for Embedded Obstacle Detection Applications," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 11, pp. 1577-1589, November 2010.
3. J.M. Bodily, B.E. Nelson, Z.Y. Wei, D.J. Lee, and J. Chase, "A Comparison Study On Implementing Optical Flow and Digital Communications on FPGAs and GPUs," ACM Transactions on Reconfigurable Technology and Systems, vol. 3, no. 2, Article 6, 22 pages, May 2010.
2. Z.Y. Wei, D.J. Lee, B.E. Nelson, J.K. Archibald, and B.B. Edwards, "FPGA-Based Embedded Motion Estimation Sensor," International Journal of Reconfigurable Computing, vol. 2008, Article ID 636145, 8 pages, July 2008.
1. Z.Y. Wei, D.J. Lee, and B.E. Nelson, "FPGA-based Real-time Optical Flow Algorithm Design and Implementation," Journal of Multimedia, vol. 2, no. 5, pp. 38-45, September 2007.