The concept of parsimony, or sparsity, has been fundamental in the development of statistical theory and techniques. Indeed, statistical models that account for sparsity have been shown to be more interpretable and to generalize better. However, imposing sparsity constraints comes at the cost of non-convexity and difficult-to-solve inference problems. Thus, a substantial amount of research over the past 20 years has been devoted to developing tractable approximations to such inference problems with sparsity. Unfortunately, most of the existing techniques achieve sparsity at the cost of inducing substantial bias in the models, hampering the quality of the estimators.
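The shrinkage bias mentioned above can be seen in one line of arithmetic: the l1 (lasso) proximal operator shrinks every surviving coefficient by the regularization level, while the l0-style hard-thresholding operator leaves large coefficients untouched. A minimal numpy sketch (not from the talk; the values and threshold level are illustrative):

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the l1 penalty (lasso): every surviving
    # coefficient is shrunk toward zero by lam, biasing large estimates.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    # Hard thresholding at level lam (l0-style): coefficients above the
    # threshold are kept exactly, so large estimates carry no shrinkage bias.
    return np.where(np.abs(z) > lam, z, 0.0)

z = np.array([0.3, -0.8, 5.0])   # hypothetical least-squares estimates
lam = 1.0
print(soft_threshold(z, lam))    # large entry shrunk from 5.0 to 4.0
print(hard_threshold(z, lam))    # large entry preserved at 5.0
```

Both operators set the small entries to zero, but only soft thresholding pays for sparsity by distorting the large coefficient.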

This talk focuses on how to bridge the existing gap between non-convex problems with sparsity and their convex approximations. Specifically, it discusses how ideas from combinatorial optimization and disjunctive programming can be used to derive tight convex relaxations of well-known inference problems with sparsity, including best subset selection. The relaxations can be interpreted as non-convex regularizations that do not induce any bias in sparse estimators, and yet have better sparsity-inducing properties than state-of-the-art techniques. Moreover, the new relaxations can be implemented using off-the-shelf conic optimization software, and solved in seconds for problems with hundreds or thousands of variables. Computations with real and synthetic datasets indicate that the resulting estimators are often optimal or near-optimal for the original non-convex inference problems, and yield much better estimation properties than existing alternatives.
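A flavor of these relaxations in one variable: the perspective (conic-representable) relaxation of the non-convex objective x² + λ·1{x≠0} has a closed form that coincides exactly with the non-convex function whenever |x| ≥ √λ, i.e., it adds no bias to large coefficients. The numpy sketch below is illustrative only (it is not code from the talk, and λ and the quadratic loss are assumptions):

```python
import numpy as np

def l0_obj(x, lam):
    # Non-convex target: quadratic loss plus a fixed charge lam when x != 0.
    return x**2 + lam * (x != 0)

def perspective_penalty(x, lam):
    # Closed form of min_{z in (0,1]} x^2/z + lam*z, the perspective
    # relaxation of l0_obj: it equals 2*sqrt(lam)*|x| for |x| <= sqrt(lam)
    # (an l1-like regime near zero) and x^2 + lam beyond (exact, no bias).
    ax = np.abs(x)
    return np.where(ax <= np.sqrt(lam), 2.0 * np.sqrt(lam) * ax, ax**2 + lam)

xs = np.array([0.5, 1.0, 3.0])
print(perspective_penalty(xs, 1.0))  # relaxation: [ 1.  2. 10.]
print(l0_obj(xs, 1.0))               # non-convex:  [ 1.25  2.  10.]
```

The relaxation lower-bounds the non-convex objective everywhere and is tight for |x| ≥ √λ, which is the sense in which such relaxations "do not induce any bias" on large sparse estimators.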

**BIO:** Andrés Gómez received his B.S. in Mathematics and B.S. in Computer Science from the Universidad de los Andes (Colombia) in 2011 and 2012, respectively. He then obtained his M.S. and Ph.D. in Industrial Engineering and Operations Research from the University of California, Berkeley in 2014 and 2017, respectively. From 2017 to 2019, Dr. Gómez worked as an Assistant Professor in the Department of Industrial Engineering at the University of Pittsburgh, and since 2019 he has been an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Southern California. Dr. Gómez's research focuses on developing new theory and tools for challenging optimization problems arising in finance, machine learning, and statistics.