Open Problem: Are all VC classes learnable with computable learners?
Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner
Session: Open Problems
Abstract:
The independence from set theory of certain learnability results ([Ben-David et al. 2017, 2019]) motivated us to consider learnability by algorithms with computable output predictors (both learners and predictors are then representable as finite objects). In [Agarwal et al. 2020] we introduced the notion of CPAC learnability by adding basic computability requirements to the PAC learning framework. As a first step, we showed there that, in contrast with the well-studied setting without such restrictions, in this framework proper learnability of a hypothesis class is not implied by the finiteness of its VC-dimension. A major remaining open question is whether a similar result holds even for improper learning. Namely, does there exist a computable concept class (of computable classifiers) of finite VC-dimension that no computable learner can PAC learn (even if the learner is not restricted to output a hypothesis that is a member of the class)?
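The open question can be sketched a bit more formally as follows. This is only a paraphrase of the abstract's prose statement; the precise definitions of "computable class", "computable learner", and the (agnostic vs. realizable) flavor of the PAC guarantee are those of [Agarwal et al. 2020] and are not fully spelled out here.

```latex
% Open question (informal formalization). Here L_D denotes 0-1 loss
% under distribution D, and A(S) is the (computable) predictor output
% by learner A on sample S. All quantifier placements are a sketch,
% not the paper's exact statement.
\exists\,\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}
  \text{ computable, with } \mathrm{VC}(\mathcal{H})<\infty,
  \text{ such that for every computable learner } A:\\
\exists\,\epsilon,\delta>0\ \forall m\in\mathbb{N}\ \exists D:\quad
\Pr_{S\sim D^m}\!\Big[\,L_D\big(A(S)\big)
  > \inf_{h\in\mathcal{H}} L_D(h)+\epsilon\Big] > \delta .
```

Note that $A(S)$ is not required to lie in $\mathcal{H}$; ruling out all computable improper learners is exactly what distinguishes this question from the properness impossibility result already shown.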