A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization

Bibliographic Details
Title: A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization
Authors: Wei, Bo; Haskell, William B.; Zhao, Sixiang
Publication Year: 2020
Collection: Mathematics
Subject Terms: Mathematics - Optimization and Control
More Details: We propose a new randomized algorithm that, with high probability, solves convex optimization problems with a large number of constraints. Existing methods such as interior-point or Newton-type algorithms are hard to apply to such problems because of their expensive computation and storage requirements for Hessians and matrix inversions. Our algorithm is based on nonlinear rescaling (NLR), a primal-dual-type algorithm by Griva and Polyak [Math. Program., 106(2):237-259, 2006]. NLR introduces an equivalent problem through a transformation of the constraint functions, minimizes the corresponding augmented Lagrangian for given dual variables, and then uses this minimizer to update the dual variables for the next iteration. The primal update at each iteration is the solution of an unconstrained finite-sum minimization problem whose terms are weighted by the current dual variables. Randomized first-order algorithms are especially well suited to such problems, so we use them to perform these primal updates. In particular, we use the scaled dual variables as the sampling distribution for each primal update, and we show that this distribution is optimal among all probability distributions. We conclude by demonstrating the favorable numerical performance of our algorithm.
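The abstract's loop (minimize the augmented Lagrangian for fixed duals via randomized first-order steps, sampling constraints in proportion to the scaled dual variables, then update the duals) can be sketched on a hypothetical one-dimensional toy problem. Everything below is an illustrative assumption, not the paper's implementation: the log-modified barrier psi(t) = log(1 + t) is one standard NLR transformation, and the problem data, clipping, step sizes, and iteration counts are made up for the sketch.

```python
import random
from bisect import bisect
from itertools import accumulate

random.seed(0)

# Hypothetical toy instance (illustrative only, not from the paper):
#   minimize f(x) = (x - 2)^2   subject to   c_i(x) = b_i - x >= 0,  i = 0..m-1.
# The tightest constraint is b_0 = 1, so the constrained optimum is x* = 1.
m = 50
b = [1.0 + 2.0 * i / m for i in range(m)]

k = 10.0          # NLR scaling parameter (assumed fixed here)
PSI_MAX = 10.0    # pragmatic clip on psi'(t) = 1/(1 + t) near its pole at t = -1

def dpsi(t):
    """Derivative of the log-modified barrier psi(t) = log(1 + t), clipped."""
    return min(1.0 / (1.0 + t), PSI_MAX) if t > -0.99 else PSI_MAX

lam = [1.0 / m] * m   # dual variables (this initialization is an assumption)
x = 2.0               # start at the unconstrained minimizer

for outer in range(30):
    S = sum(lam)
    p = [l / S for l in lam]          # sampling distribution = scaled duals
    cum = list(accumulate(p))
    # Randomized primal update: SGD on the augmented Lagrangian
    #   F(x) = f(x) - (1/k) * sum_i lam_i * psi(k * c_i(x)).
    # Sampling constraint i with probability p_i = lam_i / S makes
    #   f'(x) + S * psi'(k * c_i(x))        (here c_i'(x) = -1)
    # an unbiased estimate of F'(x).
    tail_sum, tail_count = 0.0, 0
    for t in range(20000):
        i = min(bisect(cum, random.random()), m - 1)
        g = 2.0 * (x - 2.0) + S * dpsi(k * (b[i] - x))
        x -= 0.5 / (t + 50) * g
        x = min(max(x, 0.0), 3.0)     # safeguard box (heuristic)
        if t >= 10000:                # average the tail iterates
            tail_sum += x
            tail_count += 1
    x = tail_sum / tail_count
    # Dual update: lam_i <- lam_i * psi'(k * c_i(x_hat)); duals grow on
    # near-active constraints, so sampling concentrates where it matters.
    lam = [l * dpsi(k * (bi - x)) for l, bi in zip(lam, b)]

print("x_hat =", round(x, 3))   # should land near the constrained optimum x* = 1
```

Note how the sampling weights track the duals: as the outer iterations proceed, almost all probability mass shifts to the single active constraint b_0 = 1, which is the intuition behind using the scaled dual variables as the sampling distribution.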
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2003.10888
Accession Number: edsarx.2003.10888
Database: arXiv