Classification accuracy as a proxy for two-sample testing

Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

When data analysts train a classifier and check if its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We prove two results that hold for all classifiers in any dimension: if the classifier's true error remains ε-better than chance for some ε > 0 as d, n → ∞, then (a) the permutation-based test is consistent (has power approaching one), and (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent. To get a finer understanding of the rates of consistency, we study a specialized setting of distinguishing Gaussians with mean-difference δ and common (known or unknown) covariance Σ, when d/n → c ∈ (0, ∞). We study variants of Fisher's linear discriminant analysis (LDA) such as “naive Bayes” in a nontrivial regime when ε → 0 (the Bayes classifier has true accuracy approaching 1/2), and contrast their power with corresponding variants of Hotelling's test. Surprisingly, the expressions for their power match exactly in terms of n, d, δ, Σ, and the LDA approach is only worse by a constant factor, achieving an asymptotic relative efficiency (ARE) of 1/√π for balanced samples. We also extend our results to high-dimensional elliptical distributions with finite kurtosis. Other results of independent interest include minimax lower bounds, and the optimality of Hotelling's test when d = o(n). Simulation results validate our theory, and we present practical takeaway messages along with natural open problems.
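The sketch below illustrates the general idea described in the abstract: train a classifier to distinguish the two samples, then test whether its held-out accuracy differs from chance via (a) a permutation test and (b) a Gaussian approximation of the null. This is a minimal illustration, not the paper's exact procedure; the logistic-regression classifier, the 50/50 data split, and the pooled-label permutation scheme are assumptions made here for concreteness.

```python
# Minimal sketch of a classifier two-sample test (illustrative, not the
# paper's exact procedure). Assumes scikit-learn and SciPy are available.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def _heldout_accuracy(Z, labels, seed):
    """Split, fit a classifier, and return held-out accuracy and test size."""
    Z_tr, Z_te, y_tr, y_te = train_test_split(
        Z, labels, test_size=0.5, random_state=seed, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    return clf.score(Z_te, y_te), len(y_te)


def classifier_two_sample_test(X, Y, n_permutations=200, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(len(X)), np.ones(len(Y))])
    acc, m = _heldout_accuracy(Z, labels, seed)

    # (a) Permutation test: shuffle the pooled sample labels and rerun the
    # whole split/train/test pipeline to build a null distribution of accuracy.
    null_accs = [_heldout_accuracy(Z, rng.permutation(labels), seed)[0]
                 for _ in range(n_permutations)]
    p_perm = (1 + sum(a >= acc for a in null_accs)) / (1 + n_permutations)

    # (b) Gaussian approximation: under the null, the held-out accuracy on m
    # test points is approximately N(1/2, 1/(4m)), giving a cheap one-sided
    # z-test that avoids refitting the classifier.
    z = (acc - 0.5) / np.sqrt(0.25 / m)
    p_gauss = norm.sf(z)

    return acc, p_perm, p_gauss
```

Either p-value can then be compared to the desired level α; the Gaussian-approximation test is the computationally efficient alternative to the permutation test, requiring only a single classifier fit.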

Original language: English
Pages (from-to): 411-434
Number of pages: 24
Journal: Annals of Statistics
Volume: 49
Issue number: 1
DOIs
Publication status: Published - Feb 2021

Bibliographical note

Publisher Copyright:
© Institute of Mathematical Statistics, 2021

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
