Acceleration of monocular depth extraction for images

Open Access
Chandrashekhar, Anusha
Graduate Program:
Computer Science and Engineering
Master of Science
Document Type:
Master Thesis
Date of Defense:
July 16, 2014
Committee Members:
  • Vijaykrishnan Narayanan, Thesis Advisor
Keywords:
  • Monocular depth extraction
  • GPU
  • CUDA
  • Hardware acceleration
  • Non-parametric depth
  • Fast depth estimator
  • Computer vision
Abstract:
This thesis evaluates and profiles a monocular depth estimation algorithm in which depth maps are generated from a single image using a non-parametric depth transfer approach. 3D depth from images has a wide range of applications in surveillance, tracking, robotics, and general scene understanding. Recent work shows that depth can serve as an important cue in visual saliency for distinguishing between similar objects. The depth transfer algorithm is evaluated on the Make3D and NYU datasets, and the relative, logarithmic, and RMS errors are reported for each. It is shown that the depth transfer algorithm outperforms state-of-the-art depth estimation algorithms.

A multi-core CPU implementation of the depth transfer algorithm is profiled to identify the compute-intensive stages of the algorithm. A Graphics Processing Unit (GPU) architecture using NVIDIA Compute Unified Device Architecture (CUDA) is proposed to accelerate the execution of this bottleneck. The architecture makes efficient use of GPU threads and memory, resulting in significant speedup: compared with the multi-core CPU implementation, the proposed GPU architecture accelerates the algorithm by up to 4.3x, depending on image size.

Finally, a fast depth estimation technique is proposed to accelerate the computation of depth for moving objects in a video sequence. This method achieves significant speedup over both the CPU and GPU implementations of the depth transfer algorithm, with processing rates approaching real time. The depth values from the fast depth estimator are compared with the ground-truth depth values, showing that the RMS error is low and within an acceptable range.
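The relative, logarithmic, and RMS error metrics named in the abstract are the standard measures used on the Make3D and NYU benchmarks. As a minimal sketch (the function name and inputs are illustrative, not taken from the thesis), they can be computed per pixel from a predicted depth map and its ground truth as follows:

```python
import math

def depth_errors(pred, gt):
    """Compute the three standard monocular-depth error metrics over
    paired predicted and ground-truth depth values (flattened lists):
      rel   - mean absolute relative error, |d_pred - d_gt| / d_gt
      log10 - mean absolute log10 error, |log10 d_pred - log10 d_gt|
      rms   - root-mean-square error, sqrt(mean((d_pred - d_gt)^2))
    """
    n = len(pred)
    rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    log10 = sum(abs(math.log10(p) - math.log10(g)) for p, g in zip(pred, gt)) / n
    rms = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    return rel, log10, rms
```

All three metrics are averaged over valid depth pixels; lower is better in each case.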