Paper Detail

Paper ID: L.4.2
Paper Title: Entropy property testing with finitely many errors
Authors: Changlong Wu, Narayana Santhanam, University of Hawaii at Manoa, United States
Session: L.4: Distribution Learning
Presentation: Lecture
Track: Statistics and Learning Theory
Abstract: Let $\mathcal{P}$ be a class of distributions over the natural numbers, and let $A$ be a subset of $\mathbb{R}^+$. We study the problem of deciding, using i.i.d. samples $X_1,X_2,\ldots$ from an unknown $p\in\mathcal{P}$, whether the entropy $H(p)$ lies in $A$. The decision is updated after every new observation $X_n$; we are interested in decision rules that make only finitely many errors no matter what the underlying source is. We give necessary and sufficient conditions on the class $\mathcal{P}$ and the set $A$ under which such decision rules exist. We show, for example, that such rules exist for testing whether the entropy is rational within a given interval and for testing whether the entropy falls in an interval of the form $(a,b]$, but that no such rule exists for deciding whether the entropy is finite or whether it falls in an interval of the form $[a,b]$. In the process, we also highlight the conceptual foundation this framework shares with regularization.
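To make the sequential setup concrete, the following is a minimal toy sketch in Python of a decision rule that is updated after every observation, using a naive plug-in entropy estimate. This is purely illustrative and is not the paper's decision rule; the function names and the membership-predicate interface for $A$ are assumptions, and a plug-in rule like this need not achieve the finitely-many-errors guarantee the paper characterizes.

```python
import math
from collections import Counter

def plug_in_entropy(counts, n):
    """Empirical (plug-in) entropy in bits of the distribution of counts over n samples."""
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sequential_decisions(samples, in_A):
    """Yield the updated decision 'H(p) in A?' after each new observation X_n.

    `in_A` is a membership predicate for the set A (an assumption of this sketch).
    """
    counts = Counter()
    for n, x in enumerate(samples, start=1):
        counts[x] += 1
        yield in_A(plug_in_entropy(counts, n))

# Example: test whether the entropy lies in the interval (0.5, 1.5].
# The first decision is False (a single observed symbol gives H = 0);
# subsequent decisions flip to True as the empirical entropy approaches 1 bit.
decisions = list(sequential_decisions([1, 2, 1, 2, 1, 2], lambda h: 0.5 < h <= 1.5))
```

A decision rule of this kind "makes finitely many errors" on a source $p$ if the yielded decisions disagree with the truth of $H(p)\in A$ only finitely often.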


2021 IEEE International Symposium on Information Theory

11-16 July 2021 | Melbourne, Victoria, Australia
