Paper ID | L.2.5
Paper Title | On Binary Statistical Classification from Mismatched Empirically Observed Statistics
Authors | Hung-Wei Hsu, I-Hsiang Wang, National Taiwan University, Taipei, Taiwan
Session | L.2: Classification
Presentation | Lecture
Track | Statistics and Learning Theory
Abstract | In this paper, we analyze the fundamental limit of statistical classification with mismatched empirically observed statistics. Unlike classical hypothesis testing, where the data distributions are known, here we only have access to two training sequences sampled i.i.d. from two unknown distributions P_0 and P_1, respectively. The goal is to classify a testing sequence sampled i.i.d. from one of two candidate distributions, each of which deviates slightly from P_0 or P_1, respectively. In other words, there is a mismatch between how the training and testing sequences are generated. The amount of mismatch is measured by the Euclidean norm of the deviation. Assuming the norm of the deviation is at most δ, we derive an asymptotically optimal test in Chernoff's regime and analyze its error exponents in both Stein's regime and Chernoff's regime. We also give upper and lower bounds on the decrease of the error exponents due to (i) the distributions being unknown and (ii) the mismatch between the training and testing distributions. When δ is small, we show that the decrease in the error exponents is linear in δ, and we characterize its first-order term.
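
The abstract does not spell out the test itself. As a rough illustration of the underlying setting only, here is a minimal Python sketch of a type-based classifier in the spirit of Gutman's classical test for known-training, matched statistics; it is not the authors' mismatch-robust test, and the function name, alphabet encoding, and threshold convention are all hypothetical choices for illustration:

```python
import numpy as np

def empirical_pmf(seq, alphabet_size):
    """Empirical distribution (type) of an integer sequence over {0, ..., alphabet_size - 1}."""
    counts = np.bincount(np.asarray(seq), minlength=alphabet_size)
    return counts / len(seq)

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) in nats; clipping avoids log(0) on empty bins."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def type_based_classify(test_seq, train0, train1, alphabet_size, threshold=0.0):
    """Decide 0 if the test type is closer (in KL) to the type of train0, else 1.

    `threshold` skews the decision, trading off the two error exponents.
    """
    q = empirical_pmf(test_seq, alphabet_size)
    p0 = empirical_pmf(train0, alphabet_size)
    p1 = empirical_pmf(train1, alphabet_size)
    return 0 if kl_divergence(q, p0) - kl_divergence(q, p1) <= threshold else 1

# Toy usage: the testing distribution deviates slightly (within delta) from P_0.
rng = np.random.default_rng(0)
train0 = rng.choice(4, size=2000, p=[0.4, 0.3, 0.2, 0.1])
train1 = rng.choice(4, size=2000, p=[0.1, 0.2, 0.3, 0.4])
test = rng.choice(4, size=500, p=[0.38, 0.32, 0.2, 0.1])
print(type_based_classify(test, train0, train1, alphabet_size=4))  # expected: 0
```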
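
For reference, the benchmarks that the "decrease of the error exponents" is naturally measured against are the classical exponents for binary hypothesis testing with known, matched distributions (these standard expressions are not stated in the abstract itself): the Chernoff-Stein lemma gives the Stein-regime exponent and the Chernoff information gives the Chernoff-regime exponent.

```latex
% Classical known-distribution benchmarks (standard results, not from the paper):
% Stein's regime: best type-II error exponent at fixed type-I error level.
\[
  E_{\mathrm{Stein}} = D(P_0 \,\|\, P_1)
    = \sum_{x} P_0(x) \log \frac{P_0(x)}{P_1(x)}
\]
% Chernoff's regime: best exponent of the maximal (equivalently, Bayes) error.
\[
  E_{\mathrm{Chernoff}} = C(P_0, P_1)
    = \max_{0 \le \lambda \le 1}
      \Bigl( -\log \sum_{x} P_0(x)^{\lambda}\, P_1(x)^{1-\lambda} \Bigr)
\]
```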