•  33
    Marr on Computational-Level Theories
    Philosophy of Science 77 (4): 477-500. 2010.
    According to Marr, a computational-level theory consists of two elements, the what and the why. This article highlights the distinct role of the Why element in the computational analysis of vision. Three theses are advanced: that the Why element plays an explanatory role in computational-level theories, that its goal is to explain why the computed function is appropriate for a given visual task, and that the explanation consists in showing that the functional relations between the representing c…
  •  18
    For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_\theta$, the trustworthiness of the ensuing evidence depends crucially on the validity of the probabilistic assumptions comprising $\mathcal{M}_\theta$, the optimality of the inference procedures employed, and the adequacy of the sample size to learn from data by securing –. It is argued that the criticism of the post-data severity evaluation of testing results based on a small n by Rochefort-Maranda …
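    As a rough illustration of the post-data severity evaluation mentioned above, the sketch below computes severity for a one-sided Normal-mean test with known variance; the function name and all numbers are hypothetical, chosen only to show how the assessment varies with the sample size n.

```python
# Hypothetical sketch: post-data severity for a one-sided Normal-mean test
# H0: mu <= mu0 vs H1: mu > mu0 with known sigma. All numbers are made up.
import numpy as np
from scipy.stats import norm

def severity(xbar, mu0, sigma, n, mu1):
    """SEV(mu > mu1) = P(d(X) <= d(x0); mu = mu1), where
    d(X) = sqrt(n) * (Xbar - mu0) / sigma is the test statistic."""
    d_obs = np.sqrt(n) * (xbar - mu0) / sigma
    shift = np.sqrt(n) * (mu1 - mu0) / sigma   # mean of d(X) when mu = mu1
    return norm.cdf(d_obs - shift)

# Same observed mean, two different sample sizes: the warranted discrepancy differs.
for n in (15, 150):
    sev = severity(xbar=10.4, mu0=10.0, sigma=1.0, n=n, mu1=10.2)
    print(f"n = {n:3d}: SEV(mu > 10.2) = {sev:.3f}")
```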
  •  52
    We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist…
  •  124
    Methodology in Practice: Statistical Misspecification Testing
    Philosophy of Science 71 (5): 1007-1025. 2004.
    The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophica…
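    To illustrate the kind of assumption-checking at issue, the sketch below runs two generic misspecification checks (normality and residual autocorrelation) on a simulated linear regression; it is a minimal stand-in, not the battery of tests developed in the paper, and the data and checks chosen are assumptions for illustration.

```python
# Minimal sketch: two generic misspecification checks on regression residuals.
# The simulated data and the choice of checks are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate a simple linear regression whose assumptions hold by construction.
n = 200
x = rng.uniform(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(0, 1, n)

# Fit by least squares and compute residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Check 1: Normality of the errors (Shapiro-Wilk test on residuals).
_, p_normality = stats.shapiro(residuals)

# Check 2: Independence of the errors (Durbin-Watson statistic; values
# near 2 are consistent with no first-order autocorrelation).
dw = np.sum(np.diff(residuals) ** 2) / np.sum(residuals ** 2)

print(f"Shapiro-Wilk p-value: {p_normality:.3f}")
print(f"Durbin-Watson statistic: {dw:.2f}")
```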
  •  83
    In empirical modeling, an important desideratum for deeming theoretical entities and processes as real is that they be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments o…
  •  90
    The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction tha…
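    A small simulation can make the SLLN anchoring concrete: under an IID Bernoulli model, the relative frequency of 'success' settles down to the model probability as the number of trials grows. The example below is illustrative only; the probability 0.3 and the seed are arbitrary.

```python
# Illustrative simulation: relative frequencies converge to the model
# probability under IID Bernoulli trials (the SLLN at work).
import numpy as np

rng = np.random.default_rng(42)
p = 0.3                                   # model probability of 'success'
draws = rng.random(100_000) < p           # IID Bernoulli(p) trials

# Relative frequency of success after each trial.
running_freq = np.cumsum(draws) / np.arange(1, draws.size + 1)

for n in (10, 100, 1_000, 100_000):
    print(f"n = {n:7,d}: relative frequency = {running_freq[n - 1]:.4f}")
```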
  •  10
    Foundational Issues in Statistical Modeling: Statistical Model Specification
    Rationality, Markets and Morals 2: 146-178. 2011.
    Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach, and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substa…
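    The sketch below is a toy contrast between Akaike-type selection and a misspecification check: AIC picks a polynomial trend by penalized goodness-of-fit, while a Durbin-Watson style statistic on the selected model's residuals flags the autocorrelation that the selection step ignores. The data-generating process and helper function are assumptions for illustration, not taken from the paper.

```python
# Toy contrast (not from the paper): Akaike-type selection over polynomial
# trends versus a misspecification check on the selected model's residuals.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: linear trend plus strongly autocorrelated AR(1) errors.
n = 300
t = np.arange(n, dtype=float)
errors = np.zeros(n)
for i in range(1, n):
    errors[i] = 0.8 * errors[i - 1] + rng.normal(0, 1)
y = 2.0 + 0.05 * t + errors

def aic_and_residuals(y, X):
    """AIC of a least-squares fit assuming Normal, independent errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    k = X.shape[1] + 1                          # coefficients plus error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + 2 * k, resid

# Akaike-type selection among polynomial trends of degree 1 to 4.
fits = {d: aic_and_residuals(y, np.vander(t, d + 1, increasing=True)) for d in range(1, 5)}
best = min(fits, key=lambda d: fits[d][0])
print("AIC-selected trend degree:", best)

# Misspecification check the selection step never performs: Durbin-Watson
# statistic on the chosen model's residuals (values well below 2 signal
# positive autocorrelation, i.e. a violated independence assumption).
resid = fits[best][1]
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"Durbin-Watson on the selected model: {dw:.2f}")
```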
  •  116
    Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Phil…
  •  17
    Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data
    Journal of Economic Methodology 22 (2): 171-196. 2015.
    The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by induci…
  •  70
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the 'fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-o…
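    A minimal sketch of the contrast drawn above: goodness-of-fit on the data used for fitting improves mechanically as the polynomial degree grows, whereas the reliability of the resulting curve is better gauged on points held out of the fit. The data, the split, and the degrees below are hypothetical.

```python
# Minimal sketch: in-sample fit versus hold-out error for polynomial
# curves of increasing degree. Data, split, and degrees are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
x_fit, y_fit = x[::2], y[::2]        # every other point used for fitting
x_new, y_new = x[1::2], y[1::2]      # interleaved hold-out points

for degree in (1, 3, 10):
    coefs = np.polyfit(x_fit, y_fit, degree)
    mse_in = np.mean((np.polyval(coefs, x_fit) - y_fit) ** 2)
    mse_out = np.mean((np.polyval(coefs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: in-sample MSE = {mse_in:.3f}, hold-out MSE = {mse_out:.3f}")
```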
  •  112
    Who Should Be Afraid of the Jeffreys-Lindley Paradox?
    Philosophy of Science 80 (1): 73-93. 2013.
    The article revisits the large n problem as it relates to the Jeffreys-Lindley paradox to compare the frequentist, Bayesian, and likelihoodist approaches to inference and evidence. It is argued that what is fallacious is to interpret a rejection of the null hypothesis as providing the same evidence for a particular alternative, irrespective of n; this is an example of the fallacy of rejection. Moreover, the Bayesian and likelihoodist approaches are shown to be susceptible to the fallacy of acceptance. The key diffe…
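    A short calculation illustrates the large n problem behind the fallacy of rejection: holding the p-value of a one-sided Normal-mean test fixed, the observed discrepancy from the null that produces the rejection shrinks toward zero as n grows. The numbers below are purely illustrative.

```python
# Illustrative arithmetic for the large n problem: fixing the p-value of a
# one-sided Normal-mean test of H0: mu = 0 (sigma = 1), the observed mean
# behind the rejection shrinks toward zero as n grows. Numbers are made up.
import numpy as np
from scipy.stats import norm

p_value = 0.02
z_obs = norm.isf(p_value)            # the same observed z-score for every n

for n in (10, 1_000, 100_000):
    xbar = z_obs / np.sqrt(n)        # observed mean that yields this p-value
    print(f"n = {n:7,d}: rejection of H0 with observed mean {xbar:.4f}")
```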
  •  70
    Is frequentist testing vulnerable to the base-rate fallacy?
    Philosophy of Science 77 (4): 565-583. 2010.
    This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with …
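    For reference, the Bayes' theorem arithmetic behind Harvard Medical School style examples is sketched below, using the commonly cited figures (prevalence 1/1000, false positive rate 5%, sensitivity 1); these numbers are assumed for illustration and are not drawn from the article.

```python
# Bayes' theorem arithmetic for a Harvard Medical School style example.
# The prevalence, sensitivity, and false positive rate below are the
# commonly cited illustrative figures, assumed here, not taken from the article.
base_rate = 0.001          # P(disease)
sensitivity = 1.0          # P(positive | disease)
false_positive = 0.05      # P(positive | no disease)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive
print(f"P(disease | positive) = {p_disease_given_positive:.3f}")   # about 0.02
```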
  •  333
    Severe testing as a basic concept in a Neyman–Pearson philosophy of induction
    British Journal for the Philosophy of Science 57 (2): 323-357. 2006.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We ar…
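    As a reminder of the pre-data quantities listed above, the sketch below computes the type I error threshold and the power function for a one-sided Normal-mean test with known variance; the significance level, sample size, and discrepancies are hypothetical.

```python
# Illustrative pre-data quantities for a one-sided Normal-mean test
# H0: mu <= 0 vs H1: mu > 0 with known sigma; reject when sqrt(n)*xbar/sigma
# exceeds c_alpha. The alpha, n, and discrepancies are hypothetical.
import numpy as np
from scipy.stats import norm

alpha, n, sigma = 0.05, 25, 1.0
c_alpha = norm.isf(alpha)                  # threshold fixing the type I error at alpha

def power(mu1):
    """P(reject H0; mu = mu1), i.e. 1 minus the type II error at mu1."""
    shift = np.sqrt(n) * mu1 / sigma       # mean of the test statistic under mu1
    return norm.sf(c_alpha - shift)

for mu1 in (0.1, 0.3, 0.5):
    print(f"power at mu = {mu1}: {power(mu1):.3f}")
```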
  •  41
    The discovery of argon: A case for learning from data?
    Philosophy of Science 77 (3): 359-380. 2010.
    Rayleigh and Ramsay discovered the inert gas argon in atmospheric air in 1895 using a carefully designed sequence of experiments guided by an informal statistical analysis of the resulting data. The primary objective of this article is to revisit this remarkable historical episode in order to make a case that the error-statistical perspective can be used to bring out and systematize (not to reconstruct) these scientists' resourceful ways and strategies for detecting and eliminating error, as…