Fast Regularized Total Least Squares Methods with Applications

Open Access
Lee, Geun Seop
Graduate Program:
Computer Science and Engineering
Doctor of Philosophy
Document Type:
Date of Defense:
June 12, 2013
Committee Members:
  • Jesse Louis Barlow, Dissertation Advisor
  • Padma Raghavan, Committee Member
  • Robert Collins, Committee Member
  • Vishal Monga, Committee Member
  • Raj Acharya, Committee Member
Keywords:
  • Total Least Squares
  • Regularization
  • High Resolution Image Reconstruction
  • Newton method
The dissertation consists of two parts. The first part considers regularized Total Least Squares problems. Constructing linear systems is an appropriate approach to understanding and modeling inverse problems, which arise in many fields of engineering and science. A least squares solution is a proper method for solving an overdetermined linear system when noise is present only in the observed data. However, when the data matrix is contaminated by noise as well as the observed data, Total Least Squares methods are more appropriate. Additionally, a regularization technique is needed when the data matrix is ill-conditioned, since otherwise Total Least Squares produces a noise-dominated output. Thus, to solve ill-posed linear systems arising from inverse problems, we develop methods for regularized Total Least Squares problems. First, we consider Total Least Squares problems with Tikhonov regularization. Golub, Hansen, and O'Leary [SIAM J. Matrix Anal. Appl., 21 (2000), pp. 185-194] show that finding the Total Least Squares solution is equivalent to solving a two-parameter linear system. Our method derives two nonlinear equations from this two-parameter linear system and applies a Newton method whose Jacobian matrix is nonsingular and can be computed inexpensively. Another approach combines the two-parameter Newton iteration with two projection methods for higher-dimensional problems: the first fixes the subspace dimension of the projection before the iterations begin by using Bidiagonal Reduction, and the second expands the subspace dynamically during the iterations by employing a generalized Krylov subspace expansion. Next, we consider another kind of regularization, known as the dual regularized Total Least Squares problem.
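As a point of reference for the problems treated above, the classical (unregularized) Total Least Squares solution of an overdetermined system Ax ≈ b can be computed from the SVD of the augmented matrix [A b]. The minimal NumPy sketch below illustrates this baseline only; it does not implement the dissertation's regularized Newton methods, and all variable names are illustrative:

```python
import numpy as np

def tls_solve(A, b):
    """Classical TLS via the SVD of the augmented matrix [A | b].

    Solves min ||[E f]||_F subject to (A + E) x = b + f.
    """
    m, n = A.shape
    C = np.column_stack([A, b])      # augmented matrix [A b]
    _, _, Vt = np.linalg.svd(C)      # singular values in descending order
    v = Vt[-1, :]                    # right singular vector for the smallest one
    if abs(v[n]) < 1e-12:
        # TLS solution does not exist when the last component vanishes
        raise ValueError("nongeneric TLS problem: v[n+1] is (near) zero")
    return -v[:n] / v[n]             # x_tls = -v(1:n) / v(n+1)
```

When A is ill-conditioned, the smallest singular vector of [A b] is dominated by noise, which is exactly why the regularized formulations in the dissertation are needed.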
Unlike Tikhonov regularization, which constrains the size of the solution, the dual regularized Total Least Squares problem imposes two constraints: one bounds the size of the error in the data matrix, and the other bounds the size of the error in the observed data. Our method again derives two nonlinear equations, similar to those of the two-parameter Newton iteration method. However, since the Jacobian matrix is not guaranteed to be nonsingular, we adopt a trust-region based iteration method to obtain the solution. In the second part, template tracking problems are considered. Template tracking is the procedure of finding, in each frame, the image patch most similar to a given template. To improve the accuracy of template tracking, appearance vectors are used to model appearance and illumination variations. This estimation involves computing the principal components of the augmented image matrix at each frame. Since computing the principal components is very expensive, we adopt truncated bidiagonalization and the Truncated URV decomposition as alternatives. Experiments show that the two methods are attractive alternatives for fast template tracking, especially as the size of the template image matrix grows.
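The appearance-subspace idea behind the tracking part can be illustrated with a plain rank-k truncated SVD standing in for the cheaper truncated bidiagonalization / Truncated URV updates developed in the dissertation. A hedged NumPy sketch (function names and the projection-residual score are illustrative, not the dissertation's exact formulation):

```python
import numpy as np

def appearance_basis(patches, k):
    """Rank-k appearance basis for a stack of vectorized template patches.

    patches: (d, t) array, one column per vectorized image patch.
    Returns the column mean and the top-k left singular vectors
    (principal appearance modes). A full SVD is used here for clarity;
    the dissertation replaces this expensive step with truncated
    bidiagonalization or Truncated URV decompositions.
    """
    mean = patches.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, U[:, :k]

def subspace_residual(patch, mean, U):
    """Distance from a candidate patch to the modeled appearance subspace."""
    r = patch[:, None] - mean
    r = r - U @ (U.T @ r)            # project out the modeled appearance modes
    return float(np.linalg.norm(r))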