11 a.m. - noon Location: SLC 2.302
Department of Mathematics
Massachusetts Institute of Technology
The asymptotic convergence rate of the Douglas-Rachford iteration for basis pursuit
For large-scale nonsmooth convex optimization problems, first-order methods involving only subgradients are commonly used thanks to their favorable scaling with problem size. Douglas-Rachford (DR) splitting is one of the most popular first-order methods in practice. It is well known that DR applied to the dual problem is equivalent to the widely used alternating direction method of multipliers (ADMM) and the relatively new split Bregman method. As motivating examples, we will first briefly review several famous convex recovery results, including compressive sensing, matrix completion, and PhaseLift, which represent the success of the convex relaxation approach in attacking certain NP-hard linear inverse problems over the last decade. Along this line, we will then discuss some preliminary results from our own work on a directional component analysis problem via nuclear norm minimization. When DR is applied to these convex optimization problems, a question of practical importance is how the parameters in DR affect its performance. We will derive an explicit formula for the sharp asymptotic convergence rate of DR for simple L1 minimization. The analysis will be verified on examples of processing seismic data in the Curvelet domain.
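To make the setting concrete, here is a minimal sketch of the DR iteration for basis pursuit, min ||x||_1 subject to Ax = b, splitting the objective into the L1 norm (whose prox is soft-thresholding) and the indicator of the affine constraint (whose prox is a projection). This is a generic textbook formulation, not the speaker's specific analysis; the step parameter `gamma` plays the role of the DR parameter whose effect on the convergence rate the talk addresses.

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_affine(v, A, b, AAt_inv):
    # Euclidean projection of v onto the affine set {x : A x = b}.
    return v + A.T @ (AAt_inv @ (b - A @ v))

def dr_basis_pursuit(A, b, gamma=1.0, iters=2000):
    """Douglas-Rachford iteration for min ||x||_1 s.t. A x = b.

    Splitting: f = ||.||_1 (prox = soft-thresholding),
               g = indicator of {A x = b} (prox = affine projection).
    gamma is the DR step parameter; the talk concerns how such
    parameters determine the asymptotic (linear) convergence rate.
    """
    n = A.shape[1]
    AAt_inv = np.linalg.inv(A @ A.T)  # assumes A has full row rank
    y = np.zeros(n)
    for _ in range(iters):
        x = project_affine(y, A, b, AAt_inv)        # prox of g at y
        z = soft_threshold(2 * x - y, gamma)        # prox of f at 2x - y
        y = y + z - x                               # DR update of the shadow iterate
    # The projected iterate x_k = prox_g(y_k) converges to a solution.
    return project_affine(y, A, b, AAt_inv)
```

With enough random Gaussian measurements relative to the sparsity (the compressive-sensing regime mentioned above), this iteration recovers the sparse signal, and its error decays at an asymptotically linear rate that depends on `gamma`.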
Sponsored by the Department of Mathematical Sciences
Host: Susan Minkoff
John Zweck, 972-883-6699
Questions? Email me.