Paper Detail

Paper ID: E.5.5
Paper Title: Two-Sample Testing on Pairwise Comparison Data and the Role of Modeling Assumptions
Authors: Charvi Rastogi, Sivaraman Balakrishnan, Nihar Shah, Aarti Singh (Carnegie Mellon University, United States)
Session: E.5: Hypothesis Testing I
Presentation: Lecture
Track: Detection and Estimation
Abstract: A number of applications require two-sample testing of pairwise comparison data. For instance, in crowdsourcing, there is a long-standing question of whether comparison data provided by people is distributed similarly to ratings converted to comparisons. Other examples include sports data analysis and peer grading. In this paper, we design a two-sample test for pairwise comparison data. We establish an upper bound on the sample complexity required to correctly distinguish between the distributions of the two sets of samples. Our test requires essentially no assumptions on the distributions. We then prove complementary information-theoretic lower bounds showing that our results are tight (in the minimax sense) up to constant factors. We also investigate the role of modeling assumptions by proving information-theoretic lower bounds for a range of pairwise comparison models (WST, MST, SST, and parameter-based models such as BTL and Thurstone).
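To make the setting concrete, the sketch below simulates two sets of pairwise comparisons from Bradley-Terry-Luce (BTL) models and applies a generic permutation two-sample test whose statistic is the distance between empirical win-frequency matrices. This is an illustrative toy, not the test proposed in the paper; the function names, the choice of Frobenius-norm statistic, and all parameter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def btl_comparisons(weights, n_samples, rng):
    """Sample (i, j, winner) triples from a BTL model: item i beats item j
    with probability w_i / (w_i + w_j). winner is 1 if i wins, else 0."""
    d = len(weights)
    out = []
    for _ in range(n_samples):
        i, j = rng.choice(d, size=2, replace=False)
        p_i_wins = weights[i] / (weights[i] + weights[j])
        out.append((i, j, int(rng.random() < p_i_wins)))
    return out

def win_matrix(comparisons, d):
    """Empirical win-frequency matrix: M[i, j] = fraction of i-vs-j
    comparisons won by i (0.5 where the pair was never compared)."""
    wins = np.zeros((d, d))
    games = np.zeros((d, d))
    for i, j, w in comparisons:
        games[i, j] += 1
        games[j, i] += 1
        wins[i, j] += w
        wins[j, i] += 1 - w
    return np.where(games > 0, wins / np.maximum(games, 1), 0.5)

def permutation_test(sample_a, sample_b, d, n_perm=2000, rng=rng):
    """Two-sample permutation test. Statistic: Frobenius distance between
    the empirical win matrices of the two samples. Returns a p-value."""
    stat = np.linalg.norm(win_matrix(sample_a, d) - win_matrix(sample_b, d))
    pooled = sample_a + sample_b
    n_a = len(sample_a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        a = [pooled[k] for k in perm[:n_a]]
        b = [pooled[k] for k in perm[n_a:]]
        t = np.linalg.norm(win_matrix(a, d) - win_matrix(b, d))
        count += t >= stat
    return (count + 1) / (n_perm + 1)  # standard permutation p-value
```

With two clearly different BTL weight vectors (e.g. increasing vs. decreasing weights over five items and a few thousand comparisons per sample), the permutation p-value comes out small; with identical weights it is typically large. The paper's point, by contrast, is a test with sharp minimax sample-complexity guarantees and essentially no such modeling assumptions.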



2021 IEEE International Symposium on Information Theory

11-16 July 2021 | Melbourne, Victoria, Australia
