Computational and Theoretical Perspectives on Complexity for Linear, Conic, and Non-conic Convex Optimization
MIT, Cambridge, MA

We consider the following quite general convex optimization problem format:

    G:  minimize   c^T x
        s.t.       A x = b
                   x in P,

where P is a closed convex set, not necessarily a cone. Linear
optimization is a special case of this format in the case when P is the
nonnegative orthant. Complexity theory for linear optimization has evolved
from the bit-model based on the encoding length L, and interior-point
methods (IPMs) have played a major role in suggesting new ways to think
about and to bound the complexity of solving linear optimization. One
avenue of research on complexity theory for linear optimization is based
on Renegar's condition number C(d) for the data d = (A, b, c) of G.

On the other hand, there are a number of drawbacks in using the
condition number theory to investigate the complexity of convex
optimization, including its restriction to conic problems, undue data
dependency, and the omission of warm-start information, among others. We
have therefore tried to develop a theory of complexity for convex
optimization using geometry-based measures rather than algebraic or
data-based measures for obtaining complexity bounds. We bound the
complexity of computing an almost-optimal solution of G.
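For background, the abstract does not restate the definition of Renegar's condition number; the standard definition from the condition-number literature is via the distance to infeasibility of the data instance d = (A, b, c):

\[
\rho(d) \;=\; \inf\bigl\{\, \|\Delta d\| \;:\; d + \Delta d \ \text{is an infeasible instance} \,\bigr\},
\qquad
C(d) \;=\; \frac{\|d\|}{\rho(d)} .
\]

Here C(d) >= 1 always, and C(d) = +infinity exactly when d is ill-posed, i.e., arbitrarily small perturbations of the data can change the feasibility status of the instance.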
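To make the format G concrete, the following is a minimal sketch (not the author's method) of computing an approximate solution in the linear-optimization special case, where P is the nonnegative orthant. It applies projected gradient descent to a quadratic-penalty relaxation of the equality constraints; the function name, penalty weight, and step size are all illustrative assumptions.

```python
# Sketch: approximately solve  min c'x  s.t.  Ax = b, x in P = R^n_+
# via projected gradient on the penalized objective
#   F(x) = c'x + (beta/2) * ||Ax - b||^2,
# projecting each iterate onto the nonnegative orthant.
# All parameter choices here are illustrative, not from the abstract.

def solve_g(c, A, b, beta=100.0, step=1e-3, iters=20000):
    """Approximately minimize c'x + (beta/2)*||Ax - b||^2 over x >= 0."""
    m, n = len(A), len(c)
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient of F:  g = c + beta * A' r
        g = [c[j] + beta * sum(A[i][j] * r[i] for i in range(m))
             for j in range(n)]
        # gradient step, then projection onto P = R^n_+
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]
    return x

# Tiny instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
# The true optimum is (1, 0); the penalized problem is solved to a
# nearby point, illustrating an "almost-optimal" solution of G.
x = solve_g(c=[1.0, 2.0], A=[[1.0, 1.0]], b=[1.0])
```

With a finite penalty weight the iterates converge to a slightly infeasible point near the true optimum; driving beta up (or adding a feasibility-restoration step) tightens the equality constraint at the cost of a worse-conditioned subproblem, which is one concrete way the geometry of P and the data enter the complexity of computing an almost-optimal solution.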