11 a.m. - noon Location: FN 2.102
University of California, Irvine
Non-convex Relaxation Methods in Data Science
Constraints of a discrete character appear broadly in data science problems, such as sparsity in compressed sensing and low-bit-precision weights in deep learning. Although it is possible to impose such hard constraints directly, their continuous non-convex relaxations can be more effective and are readily integrated with various descent methods in unconstrained settings. We show examples in sparse signal recovery via L1-norm-based non-convex penalties, and in image classification with quantized deep neural networks.
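As a rough illustration of the idea (not the speaker's specific method), the sketch below recovers a sparse signal using the L1 minus L2 penalty, a well-known non-convex relaxation of sparsity. It treats the concave -||x||_2 term as part of the smooth objective and handles the L1 term with soft-thresholding, i.e., a plain proximal-gradient loop; the step size and penalty weight are illustrative choices, not tuned values.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_minus_l2_recovery(A, b, lam=0.05, iters=1000):
    """Proximal gradient for 0.5||Ax - b||^2 + lam(||x||_1 - ||x||_2).

    The smooth part is 0.5||Ax - b||^2 - lam||x||_2 (its gradient is
    used below); the L1 term is handled by soft-thresholding.
    """
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # rough step bound
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        nx = np.linalg.norm(x)
        if nx > 0:
            grad -= lam * x / nx  # gradient of -lam * ||x||_2 away from 0
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative use: recover a 5-sparse signal from 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = l1_minus_l2_recovery(A, b)
```

The same loop with soft-thresholding alone reduces to the convex L1 (ISTA) baseline; the extra -||x||_2 term is what makes the penalty non-convex and closer to a true sparsity count.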
Coffee will be served in the classroom 30 minutes prior to the talk.