•
    On characterizations of learnability with computable learners
    Proceedings of Machine Learning Research 178: 3365-3379. 2022.
    We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) of whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability…
•
    Deborah G. Mayo: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (review)
    Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 51 (3): 507-510. 2020.
•
    A generalized characterization of algorithmic probability
    Theory of Computing Systems 61 (4): 1337-1352. 2017.
    An a priori semimeasure (also known as “algorithmic probability” or “the Solomonoff prior” in the context of inductive inference) is defined as the transformation, by a given universal monotone Turing machine, of the uniform measure on the infinite strings. It is shown in this paper that the class of a priori semimeasures can equivalently be defined as the class of transformations, by all compatible universal monotone Turing machines, of any continuous computable measure in place of the uniform…
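    As a gloss on the definition in the abstract (notation after the standard literature, not taken from the paper itself), the a priori semimeasure induced by a universal monotone Turing machine U can be written:

    ```latex
    % A priori semimeasure via a universal monotone Turing machine U.
    % \lambda is the uniform measure on infinite binary sequences,
    % and x \preceq y means that x is a prefix of y.
    M(x) \;=\; \lambda\bigl(\{\omega \in \{0,1\}^{\infty} : x \preceq U(\omega)\}\bigr)
    ```

    The generalization the abstract describes replaces the uniform measure λ with an arbitrary continuous computable measure μ, giving M_μ(x) = μ({ω : x ⪯ U(ω)}) over all compatible universal machines U.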
•
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin pr…
•
    On Explaining the Success of Induction
    British Journal for the Philosophy of Science. Forthcoming.
    Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations i…
•
    Peirce, Pedigree, Probability
    Transactions of the Charles S. Peirce Society 58 (2): 138-166. 2022.
    An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing reject…
•
    Universal Prediction: A Philosophical Investigation
    Dissertation, University of Groningen. 2018.
    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-da…
•
    The meta-inductive justification of induction
    Episteme 17 (4): 519-541. 2020.
    I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empi…
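    The machine-learning results the argument rests on come from prediction with expert advice. As an illustrative sketch only (Schurz's own attractivity-weighted meta-induction differs in detail), the exponentially weighted average forecaster shows the kind of guarantee involved:

    ```python
    import math

    def exp_weights(expert_preds, outcomes, eta=0.5):
        """Exponentially weighted average forecaster, squared loss on [0, 1].

        Illustrative sketch of 'prediction with expert advice'; not the
        specific construction discussed in the paper.
        """
        n = len(expert_preds)
        w = [1.0] * n                   # uniform initial weights
        total_loss = 0.0
        expert_loss = [0.0] * n
        for t, y in enumerate(outcomes):
            s = sum(w)
            # Predict the weight-averaged forecast of the experts.
            pred = sum(wi * p[t] for wi, p in zip(w, expert_preds)) / s
            total_loss += (pred - y) ** 2
            # Downweight each expert exponentially in its incurred loss.
            for i, p in enumerate(expert_preds):
                loss = (p[t] - y) ** 2
                expert_loss[i] += loss
                w[i] *= math.exp(-eta * loss)
        return total_loss, expert_loss
    ```

    For squared loss on [0, 1] with eta = 1/2, the forecaster's cumulative loss exceeds the best expert's by at most ln(n)/eta, whatever the data. This worst-case optimality relative to the pool of strategies, rather than any absolute reliability, is what the meta-inductive argument exploits.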
•
    The Metainductive Justification of Induction: The Pool of Strategies
    Philosophy of Science 86 (5): 981-992. 2019.
    This article poses a challenge to Schurz’s proposed metainductive justification of induction. It is argued that Schurz’s argument requires a notion of optimality that can deal with an expanding pool of prediction strategies.
•
    Solomonoff Prediction and Occam’s Razor
    Philosophy of Science 83 (4): 459-479. 2016.
    Algorithmic information theory gives an idealized notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam’s razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a…
•
    The no-free-lunch theorems of supervised learning
    with Peter D. Grünwald
    Synthese 199 (3-4): 9979-10015. 2021.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justificat…
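    The skeptical reading can be reproduced in a toy computation (a hypothetical check, not from the paper): averaged over all binary target functions on a three-point domain, every deterministic, purely data-driven learner has off-training-set accuracy exactly 1/2.

    ```python
    from itertools import product

    def nfl_average_accuracy(learner):
        """Average off-training-set accuracy of a deterministic learner over
        all 2**3 binary target functions on the domain {0, 1, 2}, training
        on points 0 and 1 and testing on point 2 (toy no-free-lunch check).
        """
        train_x, test_x = [0, 1], 2
        targets = list(product([0, 1], repeat=3))   # all binary functions
        correct = 0
        for f in targets:
            train = [(x, f[x]) for x in train_x]    # labelled training sample
            if learner(train, test_x) == f[test_x]:
                correct += 1
        return correct / len(targets)

    # Any data-driven rule scores exactly 0.5 on average, e.g.
    # "always predict 0" or "copy the first training label".
    ```

    For every training sample there are exactly two consistent targets that disagree on the test point, and a learner that is a function of the data alone answers identically for both, so it is right on precisely one. Only an inductive bias toward some targets over others can break the tie.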
•
    Good Listeners, Wise Crowds, and Parasitic Experts
    with Jan-Willem Romeijn and Peter Grünwald
    Analyse & Kritik 34 (2): 399-408. 2012.
    This article comments on the article of Thorn and Schurz in this volume and focuses on what we call the problem of parasitic experts. We discuss that both meta-induction and crowd wisdom can be understood as pertaining to absolute reliability rather than comparative optimality, and we suggest that the involvement of reliability will provide a handle on this problem.
•
    On the truth-convergence of open-minded Bayesianism
    with Rianne de Heide
    Review of Symbolic Logic 15 (1): 64-100. 2022.
    Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a…