Design of Large Field-of-View High-Resolution Miniaturized Imaging System

Open Access
Ahuja, Nilesh A
Graduate Program: Electrical Engineering
Degree: Doctor of Philosophy
Document Type:
Date of Defense: December 05, 2007
Committee Members:
  • Nirmal K Bose, Committee Chair
  • William Kenneth Jenkins, Committee Member
  • William Evan Higgins, Committee Member
  • Jesse Louis Barlow, Committee Member
Keywords:
  • wavelets
  • imaging system
  • large field of view
  • superresolution
In recent years, there has been growing interest in the design of computational imaging systems to meet specific imaging goals. Such systems consist of an optical component and a digital processing component that acts on the output of the optical component to produce a high-quality result. Computational imaging systems have been used successfully, for example, to achieve enhanced depth of field. The design of a computational imaging system that achieves a large field of view (FOV), however, remains an open problem. Large-FOV imaging systems find application in several areas, such as surveillance, threat detection, and medical imaging. When designing imaging systems for such applications, high spatial resolution is generally desired in addition to a large FOV. This entails the use of low $f$-number optics, which, in turn, results in an image with very high spatial-frequency components at the focal plane. To avoid aliasing of this high-frequency content, the photoreceptor array must be sufficiently dense. Not only does this increase cost and fabrication complexity, but the signal-to-noise ratio (SNR) of the captured image also degrades as the pixel size decreases. To circumvent this problem, an imaging system inspired by the compound eye of insects is proposed in this research. Instead of a single lens capturing the entire FOV, an array of smaller lenslets, each capturing a small FOV, is arranged on a suitable curved surface. Moreover, to improve the SNR of the captured images, the pixel pitch in the focal-plane array (FPA) is deliberately kept large. To counter the resulting aliasing, superresolution techniques are used to combine the intensity values captured in neighboring low-resolution (LR) frames and reconstruct a high-resolution (HR) image. The goal of image sequence superresolution is to obtain a single HR image from multiple LR images of a given scene.
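As a minimal illustration of why multiple LR frames that sample the scene differently can recover an HR image, the following sketch uses an idealized capture model: integer sub-pixel shifts followed by pure decimation, with no optical blur, noise, or lenslet geometry (all of which matter in the actual system). The function names are illustrative, not from the dissertation. When the shifts cover every sub-pixel phase, interleaving the LR frames onto the HR grid recovers the scene exactly.

```python
import numpy as np

def capture_lr_frames(hr, shifts, factor):
    """Toy capture model: shift the scene by whole HR pixels, then
    decimate by `factor` (no blur or noise in this idealization)."""
    frames = []
    for dy, dx in shifts:
        shifted = np.roll(hr, (dy, dx), axis=(0, 1))
        frames.append(shifted[::factor, ::factor])
    return frames

def reconstruct(frames, shifts, factor):
    """Interleave the LR frames back onto the HR grid: each frame's
    samples land on a distinct sub-lattice when the shifts cover all
    sub-pixel phases, so the sum fills every HR pixel exactly once."""
    h, w = (s * factor for s in frames[0].shape)
    hr_est = np.zeros((h, w))
    for (dy, dx), lr in zip(shifts, frames):
        up = np.zeros((h, w))
        up[::factor, ::factor] = lr       # place samples on the HR grid
        hr_est += np.roll(up, (-dy, -dx), axis=(0, 1))  # undo the shift
    return hr_est

rng = np.random.default_rng(0)
hr = rng.random((8, 8))                   # "scene" on the HR grid
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)] # all phases for factor 2
frames = capture_lr_frames(hr, shifts, factor=2)
hr_est = reconstruct(frames, shifts, factor=2)
```

With four identically shifted frames instead, the interleaved grid would be missing three quarters of its samples, which is the sense in which the LR frames must each carry different information.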
To achieve this, it is essential that the LR images capture different information about the scene. The Papoulis-Brown generalized sampling theorem (GST) states conditions under which a one-dimensional bandlimited signal can be reconstructed from a set of its filtered and undersampled versions. An extension of the Papoulis-Brown GST to multidimensional as well as non-bandlimited signals is proved as part of this research, providing the theoretical basis for image sequence superresolution. The superresolution technique proposed in this research is based on the Moving Least Squares (MLS) method of approximation. The conventional MLS method estimates the intensity value at each HR point by fitting a polynomial of a fixed order to a fixed neighborhood of LR points around that point. The amount of blur introduced by this process and the amount of noise filtering that occurs depend on the order of the polynomial approximant and the size of the neighborhood. It is shown in this research that varying these parameters adaptively for each HR grid point yields HR images of superior visual quality by achieving an optimal trade-off between introduced blur and noise filtering. Finally, the performance of the proposed large-FOV imaging system with superresolution is tested by creating a virtual environment and capturing LR frames with a virtual camera whose parameters match the designed system.
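The conventional (fixed-parameter) MLS estimator described above can be sketched in one dimension: at each evaluation point, a low-order polynomial is fitted to the samples by least squares, with weights that decay with distance, and the fit is evaluated at that point. The Gaussian weight function, the parameter names, and the fixed `order`/`radius` are illustrative assumptions; the dissertation's adaptive variant would additionally vary the order and neighborhood size per HR grid point.

```python
import numpy as np

def mls_estimate(x_samples, y_samples, x_eval, order=2, radius=1.5):
    """Moving Least Squares in 1-D: for each point in `x_eval`, fit a
    polynomial of degree `order` to the samples by weighted least
    squares (Gaussian weights of width `radius`) and evaluate the fit
    at that point."""
    x_eval = np.asarray(x_eval, dtype=float)
    est = np.empty_like(x_eval)
    for k, x0 in enumerate(x_eval):
        d = x_samples - x0                      # local coordinates
        w = np.exp(-(d / radius) ** 2)          # Gaussian weights
        # Vandermonde matrix in local coordinates: columns 1, d, d^2, ...
        # so the constant coefficient is the fitted value at x0.
        A = np.vander(d, order + 1, increasing=True)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y_samples,
                                   rcond=None)
        est[k] = coef[0]
    return est

# Irregularly spaced "LR" samples of a quadratic intensity profile.
x_s = np.array([0.0, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])
y_s = 1.0 + 2.0 * x_s + 3.0 * x_s ** 2
estimates = mls_estimate(x_s, y_s, [0.25, 0.6], order=2)
```

A larger `radius` averages over more samples (stronger noise filtering, more blur), while a higher `order` follows the data more closely (less blur, weaker noise filtering); this is the trade-off the adaptive scheme tunes per grid point.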