On the Analysis of Data-driven and Distributed Algorithms for Convex Optimization Problems

Open Access
Ahmadi, Hesamoddin
Graduate Program: Industrial Engineering
Degree: Doctor of Philosophy
Document Type: Dissertation
Date of Defense: March 28, 2016
Committee Members:
  • Vinayak V Shanbhag, Dissertation Advisor
  • Necdet S Aybat, Committee Member
  • Guodong Pang, Committee Member
  • Minghui Zhu, Committee Member
Keywords:
  • Optimization and learning
  • distributed optimization in power systems
  • convex optimization
  • augmented Lagrangian
This dissertation considers the resolution of three optimization problems. The first two problems are closely related and focus on solving optimization problems in which the problem parameters are misspecified but can be estimated through a parallel learning problem. The last problem focuses on the development of a distributed algorithm for the direct current formulation of the optimal power flow problem arising in power systems operation. Next, we provide a short description of each part of this dissertation. The first part of this work considers a misspecified optimization problem that requires minimizing a function f(x;q*) over a closed and convex set X, where q* is an unknown vector of parameters that may be learnt through a parallel learning process. In this context, we examine the development of coupled schemes that generate a sequence of iterates (x_k, q_k) such that, as k goes to infinity, x_k converges to x*, a minimizer of f(x;q*) over X, and q_k converges to q*. We consider problems in which f is either smooth or nonsmooth. In smooth strongly convex regimes, we demonstrate that such schemes still display a linear rate of convergence, albeit with inferior constants. When strong convexity assumptions are weakened, we show that the standard O(1/K) convergence rate in function values is modified by an additive factor ||q_0 − q*||O(q_g^K + 1/K), where ||q_0 − q*|| represents the initial misspecification in q*, q_g denotes the contractive factor associated with the learning process, and K denotes the number of projection steps. In both convex and strongly convex regimes, diminishing steplength schemes are also provided and are less reliant on knowledge of problem parameters. Finally, we present an averaging-based subgradient scheme and show that the optimal constant steplength leads to a modification in the rate by ||q_0 − q*||O(q_g^K + 1/K), implying no effect on the standard rate of O(1/sqrt(K)).
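As an illustration of such a coupled scheme, the following minimal sketch (hypothetical data, not the dissertation's exact algorithm) pairs a projected-gradient step in x with a parallel learning step in q on a toy quadratic; the learning process is simulated as a contraction with factor q_g toward the true q*:

```python
import numpy as np

# Hedged sketch (hypothetical data, not the dissertation's exact algorithm):
# minimize f(x; q*) = 0.5*||x - q*||^2 over the box X = [0, 1]^n, where q* is
# unknown to the optimizer and is estimated by a parallel learning process,
# simulated here as a contraction with factor q_g toward q*.
rng = np.random.default_rng(0)
n = 5
q_star = rng.normal(size=n)   # true (unknown) parameter vector
x = np.zeros(n)               # optimization iterate x_k
q = np.zeros(n)               # learning iterate q_k (initially misspecified)
alpha, q_g = 0.5, 0.8         # steplength; contraction factor of the learner

def grad_f(x, q):             # gradient of f(.; q)
    return x - q

def proj_X(x):                # Euclidean projection onto [0, 1]^n
    return np.clip(x, 0.0, 1.0)

for k in range(200):
    x = proj_X(x - alpha * grad_f(x, q))   # projection step uses current q_k
    q = q_star + q_g * (q - q_star)        # learning step: contraction toward q*

x_opt = proj_X(q_star)        # minimizer of f(.; q*) over X
print(np.linalg.norm(x - x_opt))           # essentially zero
```

The optimization step never sees q*, only the current estimate q_k; the extra error it inherits decays with the learner's contraction factor, matching the additive ||q_0 − q*|| terms in the rates above.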
In the second part of the chapter, we consider the solution of misspecified monotone variational inequality problems, motivated by the need to contend with more general equilibrium problems as well as the possibility of misspecification in the constraints. In this context, we first present a constant steplength misspecified extra-gradient scheme and prove its asymptotic convergence. This scheme is reliant on problem parameters (such as Lipschitz constants), which leads us to present a misspecified variant of iterative Tikhonov regularization. Preliminary numerics support the asymptotic and rate statements. In the second portion of our work, we generalize the setting introduced in the first part and consider a misspecified optimization problem that requires minimizing a convex function f(x;q*) in x over a misspecified constraint set represented by h(x;q*) <= 0, where q* is an unknown (or misspecified) vector of parameters. We develop a joint first-order inexact augmented Lagrangian scheme for computing x* while simultaneously learning q*. The term "inexact" refers to the fact that the Lagrangian subproblem is solved only inexactly. In the first part of this chapter, we derive a non-asymptotic convergence rate for primal suboptimality, dual suboptimality, and primal infeasibility when a constant penalty parameter is employed. Given a required e-accuracy in function value, when q* is known, we prove that the iteration complexity of the inexact augmented Lagrangian scheme is O(1/e). However, when q* is misspecified and learning is introduced, the iteration complexity of the joint scheme degrades to O(1/e^4). Unfortunately, constant-penalty augmented Lagrangian schemes require tuning the penalty parameter, motivating the development of an increasing penalty parameter scheme. Specifically, in the second part of this chapter, we investigate the convergence rate of the joint scheme when an increasing sequence of penalty parameters is employed.
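The joint scheme with a constant penalty parameter can be sketched on a hypothetical scalar problem (not the dissertation's setting): min x^2 subject to h(x;q*) = q* − x <= 0 with q* = 1, so x* = 1 and the optimal multiplier is 2. The subproblem is solved inexactly by a fixed number of gradient steps, and the learning process is again simulated as a contraction toward q*:

```python
# Hedged sketch (hypothetical scalar problem, not the dissertation's exact
# scheme): min x^2  s.t.  h(x; q*) = q* - x <= 0 with q* = 1, so x* = 1 and
# the optimal multiplier is lam* = 2. The Lagrangian subproblem is solved
# inexactly by a fixed number of gradient steps, and the learning process is
# simulated as a contraction of q_k toward q*.
q_star, q, lam, x = 1.0, 0.0, 0.0, 0.0
c, q_g = 5.0, 0.7       # constant penalty parameter; learner contraction factor

def grad_L(x, lam, q):
    # gradient in x of L_c(x, lam; q) = x^2 + (max(0, lam + c*h)^2 - lam^2)/(2c)
    return 2.0 * x - max(0.0, lam + c * (q - x))

for k in range(60):
    for _ in range(50):                  # inexact primal solve: gradient steps
        x -= 0.05 * grad_L(x, lam, q)
    lam = max(0.0, lam + c * (q - x))    # multiplier update with current q_k
    q = q_star + q_g * (q - q_star)      # learning step

print(x, lam)   # approaches (1.0, 2.0)
```

Because the constraint itself depends on the misspecified q_k, the multiplier update chases a moving feasible set until the learner settles, which is the mechanism behind the degraded complexity under learning.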
Under increasing penalty parameters, we show that if q* is known, the overall iteration complexity of the inexact augmented Lagrangian scheme is O(1/e log(1/e)). In contrast to the constant penalty case, when learning is introduced, the order of the iteration complexity of the joint scheme remains unchanged. Finally, the empirical behavior of both schemes is studied on a portfolio optimization problem with a misspecified covariance matrix, and preliminary numerical results support these findings. The third part of this dissertation focuses on developing a distributed algorithm for the direct current optimal power flow problem (DCOPF) in power systems operation. Distributed optimization techniques have gained popularity in such settings as they provide a scalable avenue for resolving a broad class of such problems while respecting privacy concerns. One such problem of interest is the DCOPF problem, which seeks to minimize the cost of generation subject to flow limits, capacity constraints, and voltage bounds. In this chapter, we consider the application of an alternating direction method of multipliers (ADMM) scheme to the dual formulation of the DCOPF problem. By leveraging problem structure, we show that the updates reduce to a fully distributed scheme that relies on solving optimization problems at the nodal (or bus) level. We prove the convergence of the developed scheme and utilize known results to claim that the scheme admits a non-asymptotic rate of O(1/k). Furthermore, we show that the constant in the rate can be minimized by appropriately choosing the penalty parameter. Finally, the developed algorithm is tested on the IEEE 24-bus system and preliminary results suggest promise.
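The flavor of such fully distributed nodal updates can be conveyed by a small sharing-form ADMM sketch (a standard textbook formulation, not the dissertation's dual DCOPF scheme) for a dispatch-style toy problem: min Σ_i a_i g_i^2 subject to Σ_i g_i = D, where each bus updates its generation from local data plus a network-wide average:

```python
import numpy as np

# Hedged sketch: sharing-form ADMM (a standard textbook formulation, not the
# dissertation's dual DCOPF scheme) for the dispatch-style toy problem
#   min sum_i a_i * g_i^2   s.t.  sum_i g_i = D,
# in which each bus i updates its generation g_i from local data plus a
# network-wide average, mimicking fully distributed nodal updates.
a = np.array([1.0, 2.0, 4.0])   # hypothetical local quadratic cost coefficients
D = 7.0                          # total demand; optimal dispatch is g* = [4, 2, 1]
rho = 2.0                        # ADMM penalty parameter
g = np.zeros_like(a)             # local generation levels
u = 0.0                          # scaled dual variable (shared "price" signal)
z_bar = D / len(a)               # average target enforcing sum(g) = D

for k in range(200):
    v = g - g.mean() + z_bar - u         # local proximal targets
    g = rho * v / (2.0 * a + rho)        # closed-form local updates (parallel)
    u = u + g.mean() - z_bar             # dual update from the averaged residual

print(np.round(g, 4))   # converges to the optimal dispatch [4, 2, 1]
```

Each g_i update uses only the local cost a_i and one aggregate quantity, so no bus reveals its cost data; choosing the penalty parameter rho well speeds convergence, echoing the rate-constant tuning discussed above.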