On Binary Statistical Classification from Mismatched Empirically Observed Statistics
Hung-Wei Hsu, I-Hsiang Wang, National Taiwan University, Taipei, Taiwan
Statistics and Learning Theory
In this paper, we analyze the fundamental limit of statistical classification with mismatched empirically observed statistics. Unlike classical hypothesis testing, where we have access to the distributions of the data, here we only have two training sequences sampled i.i.d. from two unknown distributions P_0 and P_1, respectively. The goal is to classify a testing sequence sampled i.i.d. from one of two candidate distributions, each of which deviates slightly from P_0 and P_1, respectively. In other words, there is a mismatch between how the training and testing sequences are generated. The amount of mismatch is measured by the norm of the deviation in Euclidean space. Assuming the norm of the deviation is at most δ, we derive an asymptotically optimal test in Chernoff's regime and analyze its error exponents in both Stein's regime and Chernoff's regime. We also give upper and lower bounds on the decrease of the error exponents due to (i) unknown distributions and (ii) mismatch between the training and testing distributions. When δ is small, we show that the decrease in the error exponents is linear in δ, and we characterize its first-order term.