Scalable Semidefinite Programming

Mon, Feb 10, 2020, 4:00 pm
214 Fine Hall
Center for Statistics and Machine Learning
Program in Applied and Computational Mathematics


Semidefinite programming (SDP) is a powerful framework from convex optimization that has striking potential for data science applications. This talk develops new provably correct algorithms for solving large SDP problems by economizing on both the storage and the arithmetic costs. We present two methods: one based on sketching, and the other on complementarity. Numerical evidence shows that these methods are effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment, and can handle SDP instances where the matrix variable has over 10^13 entries.

The first method modifies a standard primal-dual convex method, the conditional-gradient augmented Lagrangian method, to store (and work with) a sketched version of the primal. A low-rank approximation to the primal solution can be recovered from this sketch.
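To make the sketching idea concrete, here is a minimal NumPy illustration of the Nystrom-type sketch-and-recover step, under the assumption that the primal iterate X is (approximately) low rank; the dimensions n, r, R and the matrix X are invented for the example, and the actual algorithm never forms X explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, R = 500, 3, 10  # ambient dimension, true rank, sketch size (R > r)

# Hypothetical low-rank PSD primal iterate X (the real method never stores it)
G = rng.standard_normal((n, r))
X = G @ G.T

# Sketch: keep only the n x R matrix S = X @ Omega for a fixed random Omega;
# in the algorithm, S is updated directly from the rank-one iterate updates
Omega = rng.standard_normal((n, R))
S = X @ Omega

# Nystrom-style recovery: Xhat = S (Omega^T S)^+ S^T, a rank-<=R approximation
core = Omega.T @ S
Xhat = S @ np.linalg.pinv(core) @ S.T

err = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
```

Here the recovery is exact up to rounding because rank(X) = r <= R; for an approximately low-rank X the reconstruction error degrades gracefully with the tail of the spectrum. Storage is O(nR) instead of O(n^2).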

The second method begins with a new approximate complementarity principle: given an approximate solution to the dual SDP, the primal SDP has an approximate solution whose range is contained in the null space of the dual slack matrix. For weakly constrained SDPs, this null space has very low dimension, so this observation significantly reduces the search space for the primal solution.

This result suggests an algorithmic strategy that can be implemented with minimal storage:

1) Solve the dual SDP approximately (with any first-order solver);
2) compress the primal SDP to the null space of the dual slack matrix;
3) solve the compressed primal SDP.
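The three steps above can be sketched on a toy SDP, maximize <C, X> subject to trace(X) = 1 and X positive semidefinite, whose dual minimizes y subject to Z = yI - C being positive semidefinite. This example (including the choice of C and the subspace dimension k) is an illustration of the compression idea, not the solver from the talk; in particular, the "approximate dual solve" below is faked with an eigenvalue computation rather than a first-order method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Toy problem data: a random symmetric cost matrix C
A = rng.standard_normal((n, n))
C = (A + A.T) / 2

# Step 1: approximate dual solution. The true optimum is y* = lambda_max(C);
# here we cheat and compute it directly, standing in for a first-order solver.
lam = np.linalg.eigvalsh(C)
y = lam[-1] + 1e-3
Z = y * np.eye(n) - C  # dual slack matrix, positive semidefinite

# Step 2: compress to the (near-)null space of Z. The columns of V span the
# eigenvectors of Z with the smallest eigenvalues.
w, U = np.linalg.eigh(Z)   # eigenvalues in ascending order
k = 5                      # small search-space dimension (assumed)
V = U[:, :k]

# Step 3: solve the compressed k x k primal SDP. For this toy problem it
# reduces to a top-eigenvector computation on V^T C V.
Ck = V.T @ C @ V
wk, Uk = np.linalg.eigh(Ck)
m = Uk[:, -1]
x = V @ m                  # recovered primal solution X = x x^T (rank one)

gap = lam[-1] - x @ C @ x  # objective gap to the true optimum lambda_max(C)
```

The compressed problem lives in k dimensions rather than n, so the dominant storage is the n x k basis V; the recovered X attains the optimal value up to numerical error because the null space of Z contains the optimal primal range.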