-
Error, tests and theory confirmation
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 125-154. 2010.
-
Marr on Computational-Level Theories
Philosophy of Science 77 (4): 477-500. 2010.
According to Marr, a computational-level theory consists of two elements, the what and the why. This article highlights the distinct role of the Why element in the computational analysis of vision. Three theses are advanced: that the Why element plays an explanatory role in computational-level theories, that its goal is to explain why the computed function is appropriate for a given visual task, and that the explanation consists in showing that the functional relations between the representing c…
-
Severity and Trustworthy Evidence: Foundational Problems versus Misuses of Frequentist Testing
Philosophy of Science 89 (2): 378-397. 2022.
For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_\theta$, the trustworthiness of the ensuing evidence depends crucially on the validity of the probabilistic assumptions comprising $\mathcal{M}_\theta$, the optimality of the inference procedures employed, and the adequacy of the sample size to learn from data by securing –. It is argued that the criticism of the post-data severity evaluation of testing results based on a small n by Rochefort-Maranda…
-
Causal Modeling, Explanation and Severe Testing
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 331-375. 2010.
-
Stephen T. Ziliak and Deirdre N. McCloskey's The cult of statistical significance: how the standard error costs us jobs, justice, and lives. Ann Arbor (MI): The University of Michigan Press, 2008, xxiii+322 pp (review)
Erasmus Journal for Philosophy and Economics 1 (1): 154. 2008.
-
Philosophical Scrutiny of Evidence of Risks: From Bioethics to Bioevidence
Philosophy of Science 73 (5): 803-816. 2006.
We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist…
-
Error in economics and the error statistical approach - Error in Economics: Towards a More Evidence-Based Methodology, Julian Reiss, Routledge, 2007, xxiv + 246 pages (review)
Economics and Philosophy 25 (2): 206-210. 2009.
-
Methodology in Practice: Statistical Misspecification Testing
Philosophy of Science 71 (5): 1007-1025. 2004.
The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophica…
-
When do empirical data provide reliable evidence for a hypothesis (theory)? A review of Deborah G. Mayo's Error and the Growth of Experimental Knowledge
Journal of Economic Methodology 8 (3): 443-453. 2001.
-
Graphical causal modeling and error statistics: exchanges with Clark Glymour
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 364. 2009.
-
Introduction and background
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. 2009.
-
Who Should Be Afraid of the Jeffreys-Lindley Paradox?
Philosophy of Science 80 (1): 73-93. 2013.
The article revisits the large n problem as it relates to the Jeffreys-Lindley paradox to compare the frequentist, Bayesian, and likelihoodist approaches to inference and evidence. It is argued that what is fallacious is to interpret a rejection of the null hypothesis as providing the same evidence for a particular alternative, irrespective of n; this is an example of the fallacy of rejection. Moreover, the Bayesian and likelihoodist approaches are shown to be susceptible to the fallacy of acceptance. The key diffe…
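The large-n behavior at issue in this abstract can be sketched numerically. The toy computation below is illustrative only, not taken from the paper: it holds the two-sided z-test p-value fixed near 0.05 while the Bayes factor BF_01 (under an assumed N(0, tau^2) prior on mu under the alternative) increasingly favors the null as n grows.

```python
import math

def p_value(xbar, n, sigma=1.0):
    # Two-sided p-value for the z-test of H0: mu = 0 with known sigma.
    z = abs(xbar) * math.sqrt(n) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def bayes_factor_01(xbar, n, sigma=1.0, tau=1.0):
    # BF_01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2), known sigma.
    # Under H1 the sample mean is N(0, tau^2 + sigma^2/n); under H0 it is N(0, sigma^2/n).
    v0 = sigma**2 / n
    v1 = tau**2 + v0
    log_bf = 0.5 * math.log(v1 / v0) - 0.5 * xbar**2 * (1 / v0 - 1 / v1)
    return math.exp(log_bf)

for n in (10, 1000, 100000):
    xbar = 1.96 / math.sqrt(n)  # keeps the p-value fixed near 0.05 for every n
    print(n, round(p_value(xbar, n), 3), round(bayes_factor_01(xbar, n), 2))
```

With these assumed ingredients, the same borderline 5%-level rejection yields a BF_01 that grows without bound in n, which is one standard way of stating the paradox.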
-
Is frequentist testing vulnerable to the base-rate fallacy?
Philosophy of Science 77 (4): 565-583. 2010.
This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with …
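For context, the diagnostic-screening calculation that drives examples like the Harvard Medical School test can be reproduced in a few lines. The numbers below are the standard textbook ones usually attached to that example (assumed here, not taken from the article): prevalence 1/1000, false-positive rate 5%, sensitivity treated as 100%.

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    # Bayes' theorem: P(disease | positive test), treating "H0 false"
    # as an event with a base rate -- the move the article disputes.
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.001, sensitivity=1.0,
                                false_positive_rate=0.05)
print(round(ppv, 3))  # roughly 0.02: most positive results are false positives
```

The article's point is precisely that this calculation, however correct as screening arithmetic, does not map onto the ingredients of a proper frequentist test.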
-
Severe testing as a basic concept in a Neyman–Pearson philosophy of induction
British Journal for the Philosophy of Science 57 (2): 323-357. 2006.
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We ar…
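The post-data severity evaluation this paper develops can be sketched for the simplest case, a one-sided z-test of H0: mu <= mu0 vs H1: mu > mu0 with known sigma. The sketch below uses hypothetical numbers (not from the paper); after observing a sample mean xbar that rejects H0, the severity with which the inference "mu > mu1" passes is P(Xbar <= xbar; mu = mu1).

```python
import math

def Phi(z):
    # Standard normal CDF.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def severity(xbar, mu1, n, sigma=1.0):
    # Severity with which "mu > mu1" passes, given the observed xbar:
    # SEV(mu > mu1) = P(Xbar <= xbar; mu = mu1).
    return Phi((xbar - mu1) * math.sqrt(n) / sigma)

# Illustrative numbers: n = 100, sigma = 1, observed xbar = 0.2,
# after a rejection of H0: mu <= 0.
for mu1 in (0.0, 0.1, 0.2):
    print(mu1, round(severity(0.2, mu1, n=100), 3))
```

On these assumed numbers the same rejection passes "mu > 0" with high severity but "mu > 0.2" with severity only 0.5, illustrating how the post-data evaluation blocks the fallacy of rejection.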
-
The discovery of argon: A case for learning from data?
Philosophy of Science 77 (3): 359-380. 2010.
Rayleigh and Ramsay discovered the inert gas argon in the atmospheric air in 1895 using a carefully designed sequence of experiments guided by an informal statistical analysis of the resulting data. The primary objective of this article is to revisit this remarkable historical episode in order to make a case that the error-statistical perspective can be used to bring out and systematize (not to reconstruct) these scientists' resourceful ways and strategies for detecting and eliminating error, as…
-
Error statistical modeling and inference: Where methodology meets ontology
Synthese 192 (11): 3533-3555. 2015.
In empirical modeling, an important desideratum for deeming theoretical entities and processes as real is that they be reproducible in a statistical sense. Present-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments o…
-
On a new philosophy of frequentist inference: exchanges with David Cox and Deborah G. Mayo
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 315. 2009.
-
A frequentist interpretation of probability for model-based inductive inference
Synthese 190 (9): 1555-1585. 2013.
The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction tha…
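The SLLN anchoring mentioned in this abstract has a familiar empirical face, which a minimal simulation (illustrative only, not from the paper) can display: in IID trials the relative frequency of an event converges almost surely to its probability.

```python
import random

random.seed(1)
p = 0.3  # assumed "true" probability of the event
checkpoints = {10**2, 10**4, 10**6}
successes = 0
for trial in range(1, 10**6 + 1):
    successes += random.random() < p  # Bernoulli(p) trial
    if trial in checkpoints:
        print(trial, successes / trial)  # relative frequency approaches p
```

Note that the simulation itself presupposes a validated statistical model (IID Bernoulli trials), which is the paper's point about model-based, as opposed to enumerative, induction.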
-
Theory testing in economics and the error-statistical perspective
In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press, Cambridge. pp. 1-419. 2009.
-
Foundational Issues in Statistical Modeling: Statistical Model Specification
Rationality, Markets and Morals 2: 146-178. 2011.
Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach, and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substa…
-
Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (edited book)
Cambridge University Press. 2009.
Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Phil…
-
Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data
Journal of Economic Methodology 22 (2): 171-196. 2015.
The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by induci…
-
Curve Fitting, the Reliability of Inductive Inference, and the Error-Statistical Approach
Philosophy of Science 74 (5): 1046-1066. 2007.
The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the 'fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-o…
-
Virginia Tech
Regular Faculty
Blacksburg, Virginia, United States of America
Areas of Specialization
Epistemology | Metaphysics | Philosophy of Social Science | Philosophy of Probability