
Stephen T. Ziliak and Deirdre N. McCloskey's The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: The University of Michigan Press, 2008, xxiii+322 pp (review). Erasmus Journal for Philosophy and Economics 1 (1): 154. 2008.

Philosophical Scrutiny of Evidence of Risks: From Bioethics to Bioevidence. Philosophy of Science 73 (5): 803–816. 2006. We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or meta-scientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist…

Review of Error in Economics: Towards a More Evidence-Based Methodology (review). Economics and Philosophy 25 (2): 206–210. 2009.

Methodology in Practice: Statistical Misspecification Testing. Philosophy of Science 71 (5): 1007–1025. 2004. The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophica…

Error statistical modeling and inference: Where methodology meets ontology. Synthese 192 (11): 3533–3555. 2015. In empirical modeling, an important desideratum for deeming theoretical entities and processes as real is that they be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments o…

Error in economics and the error statistical approach. Economics and Philosophy 25 (2): 206. 2009.

When do empirical data provide reliable evidence for a hypothesis (theory)? A review of Deborah G. Mayo's Error and the Growth of Experimental Knowledge. Journal of Economic Methodology 8 (3): 443–453. 2001.

On a new philosophy of frequentist inference: exchanges with David Cox and Deborah G. Mayo. In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 315. 2010.

A frequentist interpretation of probability for model-based inductive inference. Synthese 190 (9): 1555–1585. 2013. The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction tha…

Revisiting the omitted variables argument: Substantive vs. statistical adequacy. Journal of Economic Methodology 13 (2): 179–218. 2006. The problem of omitted variables is commonly viewed as a statistical misspecification issue which renders the inference concerning the influence of Xt on yt unreliable, due to the exclusion of certain relevant factors Wt. That is, omitting certain potentially important factors Wt may confound the influence of Xt on yt. The textbook omitted variables argument attempts to assess the seriousness of this unreliability using the sensitivity of the estimator to the inclusion/exclusion of Wt, b…

Foundational Issues in Statistical Modeling: Statistical Model Specification. Rationality, Markets and Morals 2: 146–178. 2011. Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substa…

Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (edited book). Cambridge University Press. 2009. Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Phil…

Revisiting data mining: 'hunting' with or without a license. Journal of Economic Methodology 7 (2): 231–264. 2000. The primary objective of this paper is to revisit a number of empirical modelling activities which are often characterized as data mining, in an attempt to distinguish between the problematic and the non-problematic cases. The key for this distinction is provided by the notion of error-statistical severity. It is argued that many unwarranted data mining activities often arise because of inherent weaknesses in the Traditional Textbook (TT) methodology. Using the Probabilistic Reduction (PR) appro…

Curve Fitting, the Reliability of Inductive Inference, and the Error-Statistical Approach. Philosophy of Science 74 (5): 1046–1066. 2007. The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the 'fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-o…

The discovery of argon: A case for learning from data? Philosophy of Science 77 (3): 359–380. 2010. Rayleigh and Ramsay discovered the inert gas argon in the atmospheric air in 1895 using a carefully designed sequence of experiments guided by an informal statistical analysis of the resulting data. The primary objective of this article is to revisit this remarkable historical episode in order to make a case that the error-statistical perspective can be used to bring out and systematize (not to reconstruct) these scientists' resourceful ways and strategies for detecting and eliminating error, as…

Graphical causal modeling and error statistics: exchanges with Clark Glymour. In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. pp. 364. 2010.

Introduction and background. In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press. 2010.

Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data. Journal of Economic Methodology 22 (2): 171–196. 2015. The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by induci…

Error in economics and the error statistical approach. Error in Economics: Towards a More Evidence-Based Methodology, Julian Reiss, Routledge, 2007, xxiv + 246 pages (review). Economics and Philosophy 25 (2): 206–210. 2009.

Theory testing in economics and the error-statistical perspective. In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge University Press, Cambridge. pp. 1419. 2010.

Is frequentist testing vulnerable to the base-rate fallacy? Philosophy of Science 77 (4): 565–583. 2010. This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with …

Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. British Journal for the Philosophy of Science 57 (2): 323–357. 2006. Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We ar…

Review of S. T. Ziliak and D. N. McCloskey's The Cult of Statistical Significance (review). Erasmus Journal for Philosophy and Economics 1 (1): 154–164. 2008.

Virginia Tech, Regular Faculty
Blacksburg, Virginia, United States of America
Areas of Specialization
Epistemology 
Metaphysics 
Philosophy of Social Science 
Philosophy of Probability 