Blacksburg, Virginia, United States of America
  •  358
    Severe testing as a basic concept in a Neyman–Pearson philosophy of induction
    British Journal for the Philosophy of Science 57 (2): 323-357. 2006.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We ar…
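The abstract above turns on the pre-data error probabilities of a Neyman–Pearson test (type I and II errors, power). As a minimal illustrative sketch, not drawn from the paper, the following Python snippet computes these quantities for a one-sided test on a normal mean with known variance; the values of mu0, mu1, sigma, n, and alpha are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical one-sided test of H0: mu = mu0 vs H1: mu > mu0,
# with known sigma and sample size n (illustrative values only).
mu0, mu1, sigma, n, alpha = 0.0, 0.5, 1.0, 25, 0.05

se = sigma / np.sqrt(n)
# Pre-data cut-off: reject H0 when the sample mean exceeds this value.
cutoff = mu0 + stats.norm.ppf(1 - alpha) * se

type_I = 1 - stats.norm.cdf(cutoff, loc=mu0, scale=se)   # equals alpha by construction
power  = 1 - stats.norm.cdf(cutoff, loc=mu1, scale=se)   # 1 - type II error at mu1

print(f"cut-off={cutoff:.3f}, type I error={type_I:.3f}, power at mu1={power:.3f}")
```

The computation itself is purely pre-data; the question the paper presses is how such quantities are to be used for post-data inference rather than inductive behavior.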
  •  318
    Experimental practice and an error statistical account of evidence
    Philosophy of Science 67 (3): 207. 2000.
    In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly…
  •  246
    Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science
    British Journal for the Philosophy of Science 47 (2): 271-290. 1996.
    Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuh…
  •  227
    Behavioristic, evidentialist, and learning models of statistical testing
    Philosophy of Science 52 (4): 493-516. 1985.
    While orthodox (Neyman-Pearson) statistical tests enjoy widespread use in science, the philosophical controversy over their appropriateness for obtaining scientific knowledge remains unresolved. I shall suggest an explanation and a resolution of this controversy. The source of the controversy, I argue, is that orthodox tests are typically interpreted as rules for making optimal decisions as to how to behave--where optimality is measured by the frequency of errors the test would commit in a long …
  •  223
    In a recent discussion note Sober (1985) elaborates on the argument given in Sober (1982) to show the inadequacy of Ronald Giere's (1979, 1980) causal model for cases of frequency-dependent causation, and denies that Giere's (1984) response avoids the problem he raises. I argue that frequency-dependent effects do not pose a problem for Giere's original causal model, and that all parties in this dispute have been guilty of misinterpreting the counterfactual populations involved in applying Giere's…
  •  164
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view…
  •  131
    Methodology in Practice: Statistical Misspecification Testing
    Philosophy of Science 71 (5): 1007-1025. 2004.
    The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophica…
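Since the abstract concerns checking the assumptions on which statistical methods are based, here is a minimal sketch, not the paper's own procedure, of two routine misspecification checks on regression residuals; the simulated data, the Shapiro–Wilk normality test, and the Levene variance test are all illustrative choices.

```python
import numpy as np
from scipy import stats

# Hypothetical data: probe the normality and constant-variance assumptions
# behind a simple linear regression before trusting its error probabilities.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (intercept + slope * x)

# Misspecification checks on the residuals (illustrative only):
_, p_normal = stats.shapiro(residuals)          # normality of the errors
first, second = residuals[:50], residuals[50:]
_, p_equalvar = stats.levene(first, second)     # roughly constant error variance

print(f"normality p={p_normal:.3f}, equal-variance p={p_equalvar:.3f}")
```

Low p-values in such checks would signal that the assumptions underwriting the primary inference are themselves in question, which is the situation the abstract describes.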
  •  129
    Models of group selection
    with Norman L. Gilinsky
    Philosophy of Science 54 (4): 515-538. 1987.
    The key problem in the controversy over group selection is that of defining a criterion of group selection that identifies a distinct causal process that is irreducible to the causal process of individual selection. We aim to clarify this problem and to formulate an adequate model of irreducible group selection. We distinguish two types of group selection models, labeling them type I and type II models. Type I models are invoked to explain differences among groups in their respective rates of pr…
  •  123
    Although both philosophers and scientists are interested in how to obtain reliable knowledge in the face of error, there is a gap between their perspectives that has been an obstacle to progress. By means of a series of exchanges between the editors and leaders from the philosophy of science, statistics and economics, this volume offers a cumulative introduction connecting problems of traditional philosophy of science to problems of inference in statistical and empirical modelling practice. Phil…
  •  120
    Ontology & Methodology
    Synthese 192 (11): 3413-3423. 2015.
    Philosophers of science have long been concerned with the question of what a given scientific theory tells us about the contents of the world, but relatively little attention has been paid to how we set out to build theories and to the relevance of pre-theoretical methodology to a theory’s interpretation. In the traditional view, the form and content of a mature theory can be separated from any tentative ontological assumptions that went into its development. For this reason, the target of inter…
  •  111
    Novel evidence and severe tests
    Philosophy of Science 58 (4): 523-552. 1991.
    While many philosophers of science have accorded special evidential significance to tests whose results are "novel facts", there continues to be disagreement over both the definition of novelty and why it should matter. The view of novelty favored by Giere, Lakatos, Worrall and many others is that of use-novelty: An accordance between evidence e and hypothesis h provides a genuine test of h only if e is not used in h's construction. I argue that what lies behind the intuition that novelty matter…
  •  105
    The New Experimentalism, Topical Hypotheses, and Learning from Error
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 270-279. 1994.
    An important theme to have emerged from the new experimentalist movement is that much of actual scientific practice deals not with appraising full-blown theories but with the manifold local tasks required to arrive at data, distinguish fact from artifact, and estimate backgrounds. Still, no program for working out a philosophy of experiment based on this recognition has been demarcated. I suggest why the new experimentalism has come up short, and propose a remedy appealing to the practice of sta…
  •  95
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest an…
  •  92
    In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they can be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments o…
  •  91
    I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary hypotheses fr…
  •  91
    Peircean Induction and the Error-Correcting Thesis
    Transactions of the Charles S. Peirce Society 41 (2). 2005.
  •  76
    Novel work on problems of novelty? Comments on Hudson
    Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (1): 131-134. 2003.
  •  73
    How to discount double-counting when it counts: Some clarifications
    British Journal for the Philosophy of Science 59 (4): 857-879. 2008.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober (2004) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity…
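For readers unfamiliar with the severity criterion mentioned above, the following sketch computes post-data severity for claims of the form mu > mu1 after an observed sample mean in a one-sided normal test; the numbers (mu0, sigma, n, xbar) are assumed for illustration and do not come from the paper.

```python
import numpy as np
from scipy import stats

# Post-data severity for the claim "mu > mu1" after observing xbar in a
# one-sided normal test of H0: mu <= mu0 (illustrative numbers only).
mu0, sigma, n = 0.0, 1.0, 100
xbar = 0.4                      # hypothetical observed sample mean
se = sigma / np.sqrt(n)

def severity(mu1):
    # Probability of a result less extreme than the one observed, were mu = mu1.
    return stats.norm.cdf((xbar - mu1) / se)

for mu1 in (0.0, 0.2, 0.4):
    print(f"SEV(mu > {mu1}) = {severity(mu1):.3f}")
```

On this reading, a claim passes with high severity only if a result as extreme as the one observed would have been improbable were the claim false, which is the standard the abstract says should govern admissible double-uses of data.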
  •  72
    Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific cl…
  •  70
    We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist…
  •  64
    Error and the Growth of Experimental Knowledge
    with Michael Kruse
    Philosophical Review 107 (2): 324. 1998.
    Once upon a time, logic was the philosopher’s tool for analyzing scientific reasoning. Nowadays, probability and statistics have largely replaced logic, and their most popular application—Bayesianism—has replaced the qualitative deductive relationship between a hypothesis h and evidence e with a quantitative measure of h’s probability in light of e.
  •  62
    Some methodological issues in experimental economics
    Philosophy of Science 75 (5): 633-645. 2008.
    The growing acceptance and success of experimental economics has increased the interest of researchers in tackling philosophical and methodological challenges to which their work increasingly gives rise. I sketch some general issues that call for the combined expertise of experimental economists and philosophers of science, of experiment, and of inductive-statistical inference and modeling.
  •  61
    In defense of the Neyman-Pearson theory of confidence intervals
    Philosophy of Science 48 (2): 269-280. 1981.
    In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which…
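To make the contrast between confidence levels and measures of "final precision" concrete, here is a small simulation, not from the paper, of the long-run coverage of a standard 95% interval for a normal mean with known sigma; mu_true, sigma, n, and the number of repetitions are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

# Coverage simulation for a 95% Neyman-Pearson confidence interval for a
# normal mean with known sigma (illustrative values only).
rng = np.random.default_rng(1)
mu_true, sigma, n, reps = 5.0, 2.0, 30, 10_000
z = stats.norm.ppf(0.975)

covered = 0
for _ in range(reps):
    sample = rng.normal(mu_true, sigma, n)
    xbar = sample.mean()
    half = z * sigma / np.sqrt(n)
    covered += (xbar - half <= mu_true <= xbar + half)

# The 95% refers to this long-run coverage rate of the procedure,
# not to the probability that any single realized interval contains mu.
print(f"empirical coverage: {covered / reps:.3f}")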
  •  61
    The Philosophical Relevance of Statistics
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1980. 1980.
    While philosophers have studied probability and induction, statistics has not received the kind of philosophical attention mathematics and physics have. Despite increasing use of statistics in science, statistical advances have been little noted in the philosophy of science literature. This paper shows the relevance of statistics to both theoretical and applied problems of philosophy. It begins by discussing the relevance of statistics to the problem of induction and then discusses the reasoning…
  •  56
    Error and the growth of experimental knowledge
    International Studies in the Philosophy of Science 15 (1): 455-459. 1996.
  •  55
    The error statistical account of testing uses statistical considerations, not to provide a measure of probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
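As a rough numerical illustration of the contrast drawn above, and not an argument made in the paper itself, the snippet below compares an error probability (a p-value) with a subjective Bayesian posterior for the same data under an assumed point-null prior; sigma, n, xbar, tau, and the prior weight of 0.5 on H0 are all assumptions.

```python
import numpy as np
from scipy import stats

# Contrast (illustrative only): an error probability attached to a test
# procedure versus a subjective Bayesian posterior for the same data.
sigma, n, xbar = 1.0, 100, 0.25          # hypothetical observed mean
se = sigma / np.sqrt(n)

# Error-statistical quantity: p-value of the two-sided test of H0: mu = 0.
z = xbar / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# Subjective Bayesian quantity: posterior of H0 under an assumed prior
# P(H0) = 0.5 and mu ~ N(0, tau^2) under H1 (all of these are assumptions).
tau = 1.0
m0 = stats.norm.pdf(xbar, loc=0.0, scale=se)                       # likelihood under H0
m1 = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(se**2 + tau**2))  # marginal under H1
posterior_H0 = m0 / (m0 + m1)

print(f"p-value = {p_value:.4f}, posterior P(H0 | x) = {posterior_H0:.4f}")
```

With these assumed numbers the p-value is small while the posterior on H0 remains sizable, the kind of divergence that animates the exchange between the error statistical and subjective Bayesian approaches.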