On addressing nonconvexity, stochasticity and complementarity in mathematical optimization

Restricted (Penn State Only)
Xie, Yue
Graduate Program:
Industrial Engineering
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
July 27, 2018
Committee Members:
  • Uday V Shanbhag, Dissertation Advisor
  • Uday V Shanbhag, Committee Chair
  • Necdet S Aybat, Committee Member
  • Tao Yao, Committee Member
  • Runze Li, Outside Member
  • Ethan Fang, Committee Member
Keywords:
  • nonconvex optimization
  • stochastic optimization
  • statistical learning
  • algorithm
  • variational inequality
  • robust optimization
In an era of growing informational needs, models arising from statistics, computer science, and related fields are increasingly driven by data. To reflect a range of practical and statistical requirements, however, such models have become far more complex; in particular, they are complicated by the need for regularization and the presence of both high dimensionality and manifold uncertainty, among other factors. Collectively, these intricacies lead to challenging mathematical optimization problems marked by nonconvexity, stochasticity, and complementarity. In this dissertation, motivated by the need to compute sparse solutions to statistical learning problems, whether in the presence of a nonconvex regularizer or of large amounts of data, we consider three problem classes: (I) nonconvex programs; (II) stochastic programs; and (III) uncertain complementarity problems. In each instance, we develop a range of mathematical tools and algorithmic techniques with a focus on characterizing solutions and designing scalable, tractable, and convergent algorithms.

(I) The first class comprises linearly constrained optimization problems with an l0-norm regularizer. Such problems are of particular relevance in statistical learning, and our focus lies in analyzing these models and computing stationary points of an equivalent continuous formulation. In particular, we reformulate such a problem as a mathematical program with complementarity constraints (MPCC) and solve the resulting formulation via alternating direction method of multipliers (ADMM) schemes with tractable subproblems.

(II) Our second topic concerns how the well-studied ADMM scheme can be extended to contend with expectation-valued objectives, with an emphasis on deriving rate statements and overall iteration complexity bounds.
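To convey the flavor of ADMM with tractable subproblems, the sketch below applies ADMM to a toy l0-regularized least-squares problem, min (1/2)||Ax - b||^2 + lam*||z||_0 subject to x = z. This is an illustrative sketch only, not the MPCC-based scheme of the dissertation, and all names and parameter choices are assumptions; it shows the key point that the smooth x-subproblem is a linear solve while the nonconvex l0-subproblem reduces to componentwise hard thresholding at sqrt(2*lam/rho).

```python
import numpy as np

def admm_l0_least_squares(A, b, lam=0.1, rho=1.0, iters=200):
    """Illustrative ADMM for min 0.5||Ax-b||^2 + lam*||z||_0  s.t. x = z.

    The z-update is the (nonconvex) l0 "prox", which reduces to
    componentwise hard thresholding at sqrt(2*lam/rho).
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)      # fixed system matrix for the x-update
    Atb = A.T @ b
    thresh = np.sqrt(2.0 * lam / rho)  # hard-threshold level
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth subproblem
        v = x + u
        z = np.where(np.abs(v) > thresh, v, 0.0)     # hard thresholding
        u = u + x - z                                # multiplier update
    return z
```

On a noiseless, overdetermined instance this toy scheme typically recovers the sparse generator, though, as the nonconvexity of the l0 term suggests, only convergence to stationary points can be expected in general.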
We consider two distinct algorithms in this regime; both can be applied to statistical learning problems involving large datasets.

(III) Finally, in our third study, we develop a framework for computing robust solutions to uncertain linear complementarity problems and their variants. This effort represents one of the first attempts to generalize existing robust approaches so as to capture uncertainty in a far broader class of applications, including convex Nash games, contact problems, and a breadth of equilibrium problems.
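To fix ideas for (III): a vector z solves the linear complementarity problem LCP(q, M) when z >= 0, Mz + q >= 0, and z^T(Mz + q) = 0, which holds exactly when the natural residual min(z, Mz + q) vanishes. The sketch below illustrates one possible robust notion, minimizing the worst-case natural residual over a sampled uncertainty set for q; this brute-force grid version is purely illustrative and is not necessarily the dissertation's formulation.

```python
import numpy as np

def lcp_residual(z, M, q):
    # Natural residual: zero iff z >= 0, Mz + q >= 0, z^T(Mz + q) = 0.
    return np.linalg.norm(np.minimum(z, M @ z + q))

def robust_lcp_grid(M, q_samples, z_grid):
    # Pick the candidate z whose worst-case residual over the sampled
    # uncertainty set for q is smallest (brute force, illustration only).
    best_z, best_val = None, np.inf
    for z in z_grid:
        worst = max(lcp_residual(z, M, q) for q in q_samples)
        if worst < best_val:
            best_z, best_val = z, worst
    return best_z, best_val

# One-dimensional example: M = 1, q uncertain over {-0.5, 0.0, 0.5}.
M = np.array([[1.0]])
q_samples = [np.array([q]) for q in (-0.5, 0.0, 0.5)]
z_grid = [np.array([t]) for t in np.linspace(0.0, 1.0, 101)]
z_star, worst_residual = robust_lcp_grid(M, q_samples, z_grid)
```

In this example the worst-case residual for a candidate z = t, t in [0, 1], is max(t, |t - 0.5|), which is minimized at z = 0.25 with value 0.25: no single z solves the LCP for every realization of q, so the robust solution hedges across the realizations.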