On Convex Optimization Methods for Fitting Spatial Statistical Models to Large Data Sets

Open Access
Davanloo Tajbakhsh, Sam
Graduate Program:
Industrial Engineering
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
July 23, 2015
Committee Members:
  • Enrique Del Castillo, Dissertation Advisor
  • Necdet S Aybat, Dissertation Advisor
  • Vinayak V Shanbhag, Committee Member
  • James Landis Rosenberger, Committee Member
  • Murali Haran, Committee Member
Keywords:
  • Convex Optimization
  • Gaussian Random Field
  • Spatial Statistics
  • Alternating Direction Method of Multipliers (ADMM)
The emergence of remote and online sensing devices results in big spatial data sets for which developing powerful predictive models is of great interest. Given a spatio-temporally indexed data set, Gaussian Random Field (GRF) models are powerful prediction/classification tools used in different fields, e.g., machine learning, geostatistics, and engineering. In this thesis, we propose new methods for fitting different GRF models to large spatial data sets. From an optimization perspective, we seek computationally efficient algorithms that scale well with the size of the data sets. From a statistical perspective, we explore the convergence bounds of the resulting optimal estimators. A first goal of this dissertation is to propose an efficient algorithm that fits a second-order stationary and isotropic GRF model to large data sets. The dominant frequentist approach is Maximum Likelihood (ML), which requires non-convex optimization. The problem is aggravated in big data settings, given the $\mathcal{O}(n^3)$ computational complexity of the ML method, where $n$ denotes the number of spatio-temporal data points. We make connections with the covariance selection problem and propose a two-stage algorithm with linear convergence in the first stage and a second stage that consists of solving a convex problem for a large class of spatial covariance functions. Furthermore, with proper tuning of a sparsity parameter in the first stage, the proposed algorithm admits a theoretical convergence bound. Large data sets are handled by two blocking (i.e., segmentation) schemes that enable parallel computation and ensure, without further formulation, that the predicted surface has no discontinuities at the block boundaries, in contrast to other recently proposed GRF fitting methods for large data sets that also rely on segmentation. Our second goal in this thesis is to extend the proposed methods to more general classes of GRF models. 
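To make the $\mathcal{O}(n^3)$ bottleneck concrete, the sketch below evaluates the Gaussian log-likelihood of an isotropic exponential-covariance GRF via a Cholesky factorization, the cubic-cost step that ML fitting must repeat at every parameter value. This is an illustration only, assuming synthetic locations and made-up parameter values; it is not the dissertation's two-stage algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D spatial sites and parameters (illustration only).
n = 200
S = rng.uniform(0.0, 1.0, size=(n, 2))   # spatial locations on the unit square
sigma2, phi = 1.0, 0.3                   # variance and range parameters

# Isotropic exponential covariance: C(h) = sigma2 * exp(-h / phi),
# with a tiny nugget added for numerical stability.
D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
C = sigma2 * np.exp(-D / phi) + 1e-8 * np.eye(n)

# One GRF realization to serve as data.
y = rng.multivariate_normal(np.zeros(n), C)

# Gaussian log-likelihood; the Cholesky factorization below is the
# O(n^3) step that dominates ML estimation as n grows.
L = np.linalg.cholesky(C)
alpha = np.linalg.solve(L, y)
loglik = -0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)
print(np.isfinite(loglik))
```

Each trial value of $(\sigma^2, \phi)$ requires rebuilding and refactoring the $n \times n$ matrix, which is why avoiding repeated cubic factorizations matters for large $n$.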
The first extension consists of estimating the parameters of an anisotropic covariance function, where convexity of the second-stage optimization problem is shown for a dominant class of exponential covariance functions. Next, a novel algorithm is proposed for fitting multivariate GRF models with separable covariance functions. The proposed algorithm fits a multivariate GRF in four stages with provable convergence rates and benefits greatly from parallel processing capabilities. Finally, an extension of the proposed algorithm to prediction problems for a class of continuous, non-Gaussian data known to have a nonparanormal distribution is studied. Applications of GRFs in modeling manufactured surfaces, emulating expensive-to-evaluate computer simulations, and predicting meteorological fields are presented towards the end of this dissertation.
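The computational appeal of separable multivariate covariances can be sketched with a standard Kronecker identity: if the joint covariance of $p$ outputs at $n$ sites factors as $\Sigma = T \otimes C$, then its log-determinant reduces to the $p \times p$ and $n \times n$ factors, avoiding any $\mathcal{O}((np)^3)$ factorization. The example below (hypothetical data and parameters, not from the dissertation) verifies the identity numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: p outputs observed at n spatial sites, with a
# separable covariance Sigma = kron(T, C), where T is the p x p
# cross-output covariance and C is the n x n spatial covariance.
n, p = 50, 3
S = rng.uniform(0.0, 1.0, size=(n, 2))
D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
C = np.exp(-D / 0.4)                      # spatial exponential covariance
A = rng.normal(size=(p, p))
T = A @ A.T + p * np.eye(p)               # symmetric positive-definite T

# Separability: log det(T kron C) = n * log det(T) + p * log det(C),
# so the (np x np) determinant never has to be formed directly.
logdet_full = np.linalg.slogdet(np.kron(T, C))[1]
logdet_factored = n * np.linalg.slogdet(T)[1] + p * np.linalg.slogdet(C)[1]
print(np.isclose(logdet_full, logdet_factored))
```

The same factor-by-factor reduction applies to solves with $\Sigma$, which is what makes stage-wise, parallelizable fitting of separable multivariate GRF models tractable.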