Dense Convolutional Object Detectors for Visual Assistive Systems on Mobile Platforms

Open Access
Krishna, Vinayaka N
Graduate Program:
Computer Science
Master of Science
Document Type:
Master's Thesis
Date of Defense:
April 10, 2018
Committee Members:
  • Vijaykrishnan Narayanan, Thesis Advisor
  • John Sampson, Committee Member
  • Bhuvan Urgaonkar, Committee Member
Keywords:
  • Convolutional Neural Network
  • DenseNets
  • ResNet
  • Deep Learning
  • Visual Assistance System
  • Mobile Deep Learning
There has been increasing research effort into developing convolutional neural networks that run efficiently on mobile and embedded platforms. Recent work has shown that feeding the output of a convolutional layer to all subsequent convolutional layers in the network yields models with fewer parameters that remain competitive with the state of the art on datasets such as CIFAR and ImageNet. In this thesis, we frame our work as a design space exploration of these densely connected networks to find effective methods for scaling down the number of convolutional layers for mobile systems. The proposed feature extractors are combined with an object detection architecture for our target application: using object detectors to build assistive systems for the visually impaired. We call the proposed scaled-down object detectors DYNets and evaluate their performance on the VOC dataset as well as a custom grocery dataset that simulates an example application. The proposed DYNets and baseline networks are deployed on an iPhone to further assess their performance on an actual mobile platform with GPU acceleration. Accuracy and speed versus the number of operations are contrasted for the baseline and proposed networks. Our design space exploration reveals methods for better scaling of these dense networks, in terms of the number of convolutional layers, for mobile platforms. DYNets show competitive parameter efficiency compared with the baseline networks while running efficiently on mobile devices.
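The dense connectivity pattern the abstract describes, where each layer receives the concatenated outputs of all preceding layers, can be illustrated with a minimal NumPy sketch. This is not the thesis code: the 1x1-convolution stand-in, layer count, and growth rate are illustrative assumptions, chosen only to show how the channel count grows with feature reuse.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, out_channels):
    # Stand-in for a convolution: a 1x1 conv is a per-pixel linear map
    # over channels, followed here by a ReLU. Weights are random since
    # this sketch only demonstrates connectivity, not training.
    in_channels = x.shape[0]
    w = rng.standard_normal((out_channels, in_channels)) * 0.1
    return np.maximum(0.0, np.einsum('oi,ihw->ohw', w, x))

def dense_block(x, num_layers=4, growth_rate=12):
    # DenseNet-style block: every layer sees the channel-wise
    # concatenation of the input and all earlier layer outputs,
    # so features are reused rather than recomputed.
    features = [x]
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=0)   # all prior feature maps
        new = conv1x1(concat, growth_rate)          # add growth_rate new maps
        features.append(new)
    return np.concatenate(features, axis=0)

x = rng.standard_normal((16, 8, 8))   # (channels, height, width) input
out = dense_block(x)
print(out.shape)  # channels grow as 16 + 4 * 12 = 64 -> (64, 8, 8)
```

Because each new layer only adds `growth_rate` channels, dense blocks can stay narrow per layer; this is the parameter-efficiency property that motivates scaling such networks down for mobile deployment.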