  •
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view…
  •
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest an…
  •
    Error and the Growth of Experimental Knowledge
    with Michael Kruse
    Philosophical Review 107 (2): 324. 1998.
    Once upon a time, logic was the philosopher’s tool for analyzing scientific reasoning. Nowadays, probability and statistics have largely replaced logic, and their most popular application—Bayesianism—has replaced the qualitative deductive relationship between a hypothesis h and evidence e with a quantitative measure of h’s probability in light of e.
  •
    Response to Howson and Laudan
    Philosophy of Science 64 (2): 323-333. 1997.
    A toast is due to one who slays Misguided followers of Bayes, And in their heart strikes fear and terror With probabilities of error! (E.L. Lehmann)
  •
    In a recent discussion note Sober (1985) elaborates on the argument given in Sober (1982) to show the inadequacy of Ronald Giere's (1979, 1980) causal model for cases of frequency-dependent causation, and denies that Giere's (1984) response avoids the problem he raises. I argue that frequency-dependent effects do not pose a problem for Giere's original causal model, and that all parties in this dispute have been guilty of misinterpreting the counterfactual populations involved in applying Giere's…
  •
    The Philosophical Relevance of Statistics
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1980. 1980.
    While philosophers have studied probability and induction, statistics has not received the kind of philosophical attention mathematics and physics have. Despite increasing use of statistics in science, statistical advances have been little noted in the philosophy of science literature. This paper shows the relevance of statistics to both theoretical and applied problems of philosophy. It begins by discussing the relevance of statistics to the problem of induction and then discusses the reasoning…
  •
    Behavioristic, evidentialist, and learning models of statistical testing
    Philosophy of Science 52 (4): 493-516. 1985.
    While orthodox (Neyman-Pearson) statistical tests enjoy widespread use in science, the philosophical controversy over their appropriateness for obtaining scientific knowledge remains unresolved. I shall suggest an explanation and a resolution of this controversy. The source of the controversy, I argue, is that orthodox tests are typically interpreted as rules for making optimal decisions as to how to behave--where optimality is measured by the frequency of errors the test would commit in a long…
  •
    Severe testing as a basic concept in a Neyman–Pearson philosophy of induction
    British Journal for the Philosophy of Science 57 (2): 323-357. 2006.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We ar…
  •
    Principles of inference and their consequences
    with Michael Kruse
    In David Corfield & Jon Williamson (eds.), Foundations of Bayesianism, Kluwer Academic Publishers. pp. 381--403. 2001.
  •
    Experimental practice and an error statistical account of evidence
    Philosophy of Science 67 (3): 207. 2000.
    In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly…
  •
    I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary hypotheses fr…
  •
    The New Experimentalism, Topical Hypotheses, and Learning from Error
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 270-279. 1994.
    An important theme to have emerged from the new experimentalist movement is that much of actual scientific practice deals not with appraising full-blown theories but with the manifold local tasks required to arrive at data, distinguish fact from artifact, and estimate backgrounds. Still, no program for working out a philosophy of experiment based on this recognition has been demarcated. I suggest why the new experimentalism has come up short, and propose a remedy appealing to the practice of sta…
  •
    Novel work on problems of novelty? Comments on Hudson
    Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 34 (1): 131-134. 2003.
  •
    How to Discount Double-Counting When It Counts: Some Clarifications
    British Journal for the Philosophy of Science 59 (4): 857-879. 2008.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity…
  •
    Cartwright, Causality, and Coincidence
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1986. 1986.
    Cartwright argues for being a realist about theoretical entities but non-realist about theoretical laws. Her reason is that while the former involves causal explanation, the latter involves theoretical explanation; and inferences to causes, unlike inferences to theories, can avoid the redundancy objection--that one cannot rule out alternatives that explain the phenomena equally well. I sketch Cartwright's argument for inferring the most probable cause, focusing on Perrin's inference to molecular…
  •
    Philosophy of Science Association
    In Richard Boyd, Philip Gasper & J. D. Trout (eds.), The Philosophy of Science, MIT Press. pp. 58--4. 1991.
  •
    Models of group selection
    with Norman L. Gilinsky
    Philosophy of Science 54 (4): 515-538. 1987.
    The key problem in the controversy over group selection is that of defining a criterion of group selection that identifies a distinct causal process that is irreducible to the causal process of individual selection. We aim to clarify this problem and to formulate an adequate model of irreducible group selection. We distinguish two types of group selection models, labeling them type I and type II models. Type I models are invoked to explain differences among groups in their respective rates of pr…
  •
    The error statistical account of testing uses statistical considerations, not to provide a measure of probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
  •
    Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science
    British Journal for the Philosophy of Science 47 (2): 271-290. 1996.
    Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuh…Read more