Paper ID: L.7.1
Paper Title: Limit Distribution for Smooth Total Variation and χ²-Divergence in High Dimensions
Authors: Ziv Goldfeld, Kengo Kato (Cornell University, United States)
Session: L.7: High-dimensional Statistics
Presentation: Lecture
Track: Statistics and Learning Theory
Abstract: Statistical divergences are ubiquitous in machine learning as tools for measuring the discrepancy between probability distributions. As these applications inherently rely on approximating distributions from samples, we consider empirical approximation under two popular f-divergences: the total variation (TV) distance and the χ²-divergence. To circumvent the sensitivity of these divergences to support mismatch, the framework of Gaussian smoothing is adopted. We study the limit distributions of √n δ_TV(P_n∗N_σ, P∗N_σ) and n χ²(P_n∗N_σ‖P∗N_σ), where P_n is the empirical measure based on n independent and identically distributed (i.i.d.) observations from P, N_σ := N(0, σ²I_d), and ∗ stands for convolution. In arbitrary dimension, the limit distributions are characterized in terms of a Gaussian process on ℝᵈ whose covariance operator depends on P and on the isotropic Gaussian density with parameter σ. This, in turn, implies optimality of the n^{−1/2} expected-value convergence rates recently derived for δ_TV(P_n∗N_σ, P∗N_σ) and χ²(P_n∗N_σ‖P∗N_σ). These strong statistical guarantees promote empirical approximation under Gaussian smoothing as a potent framework for learning and inference based on high-dimensional data.
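As an illustration of the quantity studied in the abstract (not code from the paper), the sketch below Monte Carlo-estimates the smoothed TV distance δ_TV(P_n∗N_σ, P∗N_σ) for the special case P = N(0, I_d), where the smoothed population measure P∗N_σ = N(0, (1+σ²)I_d) is available in closed form. The function name, sample sizes, and σ are all illustrative assumptions; the scaled values √n·δ_TV are expected to stabilize as n grows, consistent with the n^{−1/2} rate.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 2, 1.0  # illustrative choices, not values from the paper

def smoothed_tv(n, m=2000):
    """Monte Carlo estimate of delta_TV(P_n * N_sigma, P * N_sigma)
    for P = N(0, I_d), so that P * N_sigma = N(0, (1+sigma^2) I_d)."""
    X = rng.standard_normal((n, d))                # i.i.d. sample from P
    s2 = 1.0 + sigma**2
    Y = rng.standard_normal((m, d)) * np.sqrt(s2)  # Y ~ P * N_sigma
    # Density of the smoothed empirical measure P_n * N_sigma at Y:
    # a uniform mixture of the Gaussians N(X_i, sigma^2 I_d).
    sq = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (m, n)
    p_n = np.exp(-sq / (2 * sigma**2)).mean(axis=1) / (2 * np.pi * sigma**2) ** (d / 2)
    # Density of P * N_sigma at Y.
    q = np.exp(-(Y**2).sum(axis=1) / (2 * s2)) / (2 * np.pi * s2) ** (d / 2)
    # TV identity: delta_TV(mu, nu) = 0.5 * E_{Y~nu} |dmu/dnu(Y) - 1|.
    return 0.5 * np.mean(np.abs(p_n / q - 1))

for n in (100, 400, 1600):
    print(n, np.sqrt(n) * smoothed_tv(n))  # sqrt(n)-scaled error roughly stabilizes
```

Sampling the reference points Y from the smoothed population measure turns the TV integral into an expectation of |p_n/q − 1|, so no numerical integration over ℝᵈ is needed.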



2021 IEEE International Symposium on Information Theory

11-16 July 2021 | Melbourne, Victoria, Australia
