One important problem in the philosophy of science is whether there can be a normative theory of discovery, as opposed to a normative theory of justification. Although the possibility of developing a logic of scientific discovery has often been doubted by philosophers, it is particularly interesting to consider how the basic insights of a normative theory of discovery have been turned into an effective research program in computer science, namely the field of machine learning. In this paper, I introduce some current research on statistical models to a philosophical audience. In particular, I stress those features of statistical models that make them plausible computational counterparts of scientific theories. After noting how these models allow for the main kinds of inference that are a trademark of scientific theories, I focus on the problem of learning statistical models from data. The analysis shows how machine learning is casting new light on traditional problems in the philosophy of science. More precisely, I explore the implications of some results in statistical learning for the role of simplicity in theory choice and for the role of scalability (as a formal property of induction) in making scientific discovery effective.