Paper Detail

Paper ID L.2.2
Paper Title Optimality of Least-squares for Classification in Gaussian-Mixture Models
Authors Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis, University of California, Santa Barbara, United States
Session L.2: Classification
Presentation Lecture
Track Statistics and Learning Theory
Abstract We consider the problem of learning the coefficients of a linear classifier through Empirical Risk Minimization with a convex loss function in the high-dimensional setting. In particular, we introduce an approach to characterize the best achievable classification risk among convex losses, when data points follow a standard Gaussian-mixture model. Importantly, we prove that the square loss function achieves the minimum classification risk for this data model. Our numerical illustrations verify the theoretical results and show that they are accurate even for relatively small problem dimensions.
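
As a rough illustration of the setting described in the abstract, the sketch below samples data from the standard Gaussian-mixture model x = y·mu + z with z ~ N(0, I), fits a linear classifier by least squares, and compares its test error with a logistic-loss ERM fit. The dimensions n and d, the signal strength, and the gradient-descent parameters are hypothetical choices for illustration, not values taken from the paper, and the comparison is not the authors' experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes for illustration: n samples, d features.
n, d = 400, 200
mu = rng.standard_normal(d)
mu *= 3.0 / np.linalg.norm(mu)  # class-mean vector with a fixed signal strength (assumed value)

def sample_gmm(n_samples):
    """Draw (X, y) from the Gaussian-mixture model x = y * mu + z, z ~ N(0, I)."""
    y = rng.choice([-1.0, 1.0], size=n_samples)
    X = y[:, None] * mu + rng.standard_normal((n_samples, d))
    return X, y

X, y = sample_gmm(n)
X_test, y_test = sample_gmm(20_000)

# Square-loss ERM: w = argmin_w ||X w - y||^2, solved directly by least squares.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Logistic-loss ERM via plain gradient descent, for comparison (assumed step size / iterations).
w_log = np.zeros(d)
for _ in range(2000):
    margins = np.clip(y * (X @ w_log), -30.0, 30.0)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
    w_log -= 0.5 * grad

def test_error(w):
    """Fraction of test points misclassified by sign(x^T w)."""
    return np.mean(np.sign(X_test @ w) != y_test)

print(f"square loss   test error: {test_error(w_ls):.4f}")
print(f"logistic loss test error: {test_error(w_log):.4f}")
```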
