A General Characterization of the Statistical Query Complexity

Statistical query (SQ) algorithms are algorithms that have access to an SQ oracle for the input distribution D. Given a query function φ: X → [0,1], the oracle returns an estimate of the expectation E_{x∼D}[φ(x)] within some tolerance τ. Such algorithms capture a broad spectrum of algorithmic approaches used in theory and practice.
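To make the oracle interface concrete, below is a minimal Python sketch of an SQ oracle simulated from i.i.d. samples. The class name SQOracle, its query method, and the sample-based simulation with uniform noise are illustrative assumptions for exposition and are not taken from the report.

import random

class SQOracle:
    """Answers statistical queries about a distribution D, simulated from samples.

    Given a query function phi: X -> [0, 1], it returns a value within
    tolerance tau of E_{x~D}[phi(x)], modeled here as an empirical mean
    perturbed by bounded noise.
    """

    def __init__(self, samples, tau):
        self.samples = samples  # i.i.d. samples standing in for D
        self.tau = tau          # query tolerance

    def query(self, phi):
        empirical_mean = sum(phi(x) for x in self.samples) / len(self.samples)
        # The oracle is allowed to return any value within tolerance tau of the
        # true expectation; we model that freedom with uniform noise in [-tau, tau].
        return empirical_mean + random.uniform(-self.tau, self.tau)

# Example: estimate the bias of a fair coin from samples, up to tolerance 0.01.
coin_samples = [random.randint(0, 1) for _ in range(10_000)]
oracle = SQOracle(coin_samples, tau=0.01)
print(oracle.query(lambda x: float(x)))  # close to 0.5

An SQ algorithm interacts with the input distribution only through such queries; its complexity is measured by the number of queries and the tolerance they require.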

In this work we give a sharp characterization of the complexity of solving general problems over distributions using SQ algorithms. Our characterization is based on a relatively simple notion of statistical dimension. Such characterizations have been investigated over the past 20 years in learning theory, but prior characterizations are restricted to distribution-specific PAC learning [10, 14, 55, 56, 7, 48, 26, 52]. In contrast, our characterization applies to general search and decision problems, including those where the input distribution can be any distribution over an exponentially large domain. Our characterization is also the first to precisely characterize the necessary tolerance of queries, which is crucial in a range of more recent applications of SQ algorithms.

As an application of our techniques, we answer the following question, which was previously open [36]: is the SQ complexity of distribution-independent learning upper-bounded by the maximum, over all distributions, of the SQ complexity of distribution-specific PAC learning? Our results also demonstrate a separation between efficient learning from examples in the presence of random noise and SQ learning. This improves on the separation of Blum et al. [12] and fully resolves an open problem of Kearns [38]. Finally, we demonstrate implications of our characterization for algorithms that are subject to memory, communication, and local differential privacy constraints.

By: Vitaly Feldman

Published in: IBM Research Report RJ10534, 2016
