Blacksburg, Virginia, United States of America
  •  15
    Error and the Growth of Experimental Knowledge
    University of Chicago Press. 1996.
    This text provides a critique of the subjective Bayesian view of statistical inference, and proposes the author's own error-statistical approach as an alternative framework for the epistemology of experiment. It seeks to address the needs of researchers who work with statistical analysis.
  •  4
    Cartwright, Causality, and Coincidence
    PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1986 (1): 42-58. 1986.
    In How the Laws of Physics Lie (1983) Cartwright argues for being a realist about theoretical entities but non-realist about theoretical laws. Her reason for this distinction is that only the former involves causal explanation, and accepting causal explanations commits us to the existence of the causal entity invoked. “What is special about explanation by theoretical entity is that it is causal explanation, and existence is an internal characteristic of causal claims. There is nothing similar f…
  •  39
    Some surprising facts about surprising facts
    Studies in History and Philosophy of Science Part A 45 79-86. 2014.
    A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and ad hoc hypotheses, and as a preference for prede…
  •  24
    While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools…
  •  73
    How to discount double-counting when it counts: Some clarifications
    British Journal for the Philosophy of Science 59 (4): 857-879. 2008.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity…
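    For concreteness, here is a minimal sketch (not code from the paper) of how a post-data severity assessment might be computed in the simplest textbook case: a one-sided test of a Normal mean with known standard deviation. The function name and the example numbers are illustrative assumptions.

```python
# Illustrative sketch: severity for the claim mu > mu_1 after observing
# sample mean x_bar in a one-sided Normal test with sigma known.
# Function name and example values are assumptions for illustration only.
from math import sqrt
from scipy.stats import norm

def severity_mu_greater(x_bar, mu_1, sigma, n):
    """SEV(mu > mu_1) = P(sample mean <= x_bar; mu = mu_1)."""
    se = sigma / sqrt(n)
    return norm.cdf((x_bar - mu_1) / se)

# With x_bar = 0.4, sigma = 1, n = 100: the claim mu > 0.2 passes with
# severity about 0.98, while mu > 0.35 passes with severity only about 0.69.
print(severity_mu_greater(0.4, mu_1=0.2, sigma=1.0, n=100))
print(severity_mu_greater(0.4, mu_1=0.35, sigma=1.0, n=100))
```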
  •  121
    Ontology & Methodology
    Synthese 192 (11): 3413-3423. 2015.
    Philosophers of science have long been concerned with the question of what a given scientific theory tells us about the contents of the world, but relatively little attention has been paid to how we set out to build theories and to the bearing of pre-theoretical methodology on a theory’s interpretation. In the traditional view, the form and content of a mature theory can be separated from any tentative ontological assumptions that went into its development. For this reason, the target of inter…
  •  31
    Significance Tests: Vitiated or Vindicated by the Replication Crisis in Psychology?
    Review of Philosophy and Psychology 12 (1): 101-120. 2020.
    The crisis of replication has led many to blame statistical significance tests for making it too easy to find impressive-looking effects that do not replicate. However, the very fact that it becomes difficult to replicate effects when features of the tests are tied down actually serves to vindicate statistical significance tests. While statistical significance tests, used correctly, serve to bound the probabilities of erroneous interpretations of data, this error control is nullified by data-dredging…
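    The point about data-dredging can be made concrete with a small simulation (a hedged sketch of my own, not material from the paper): when twenty truly null effects are tested and only the smallest p-value is reported, the nominal 5% error rate is badly inflated.

```python
# Illustrative sketch: searching many null hypotheses and reporting only the
# best-looking result destroys the nominal 5% error guarantee.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_hypotheses, n_obs = 2000, 20, 30

false_positives = 0
for _ in range(n_trials):
    # 20 independent "effects", all truly null (mean 0).
    data = rng.normal(0.0, 1.0, size=(n_hypotheses, n_obs))
    pvals = [stats.ttest_1samp(row, 0.0).pvalue for row in data]
    if min(pvals) < 0.05:          # report only the smallest p-value
        false_positives += 1

# Roughly 1 - 0.95**20, i.e. about 0.64 rather than the nominal 0.05.
print(false_positives / n_trials)
```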
  • Error and the Growth of Experimental Knowledge
    British Journal for the Philosophy of Science 48 (3): 455-459. 1997.
  •  131
    Methodology in Practice: Statistical Misspecification Testing
    Philosophy of Science 71 (5): 1007-1025. 2004.
    The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophica…
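    As one concrete instance of such assumption-checking (an illustrative sketch under assumptions of my own choosing, not the paper's specific misspecification tests): before trusting inferences from a simple linear regression, the residuals can be probed for non-normality and serial dependence.

```python
# Illustrative sketch: crude checks of regression assumptions on toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=x.size)   # toy data

# Fit a straight line and inspect the residuals.
slope, intercept, *_ = stats.linregress(x, y)
residuals = y - (intercept + slope * x)

# Checks: normality of residuals and lag-1 serial correlation.
shapiro_p = stats.shapiro(residuals).pvalue
lag1_corr = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

print(f"Shapiro-Wilk p-value: {shapiro_p:.3f}")
print(f"Lag-1 residual correlation: {lag1_corr:.3f}")
```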
  •  12
    Learning from Error
    Modern Schoolman 87 (3-4): 191-217. 2010.
  •  11
    Acceptable Evidence: Science and Values in Risk Management (edited book)
    with Rachelle D. Hollander
    Oxford University Press USA. 1991.
    Discussions of science and values in risk management have largely focused on how values enter into arguments about risks, that is, issues of acceptable risk. Instead this volume concentrates on how values enter into collecting, interpreting, communicating, and evaluating the evidence of risks, that is, issues of the acceptability of evidence of risk. By focusing on acceptable evidence, this volume avoids two barriers to progress. One barrier assumes that evidence of risk is largely a matter of o…
  •  4
    There are two reasons, I claim, why scientists do and should ignore standard philosophical theories of objective evidence: (1) such theories propose concepts that are far too weak to give scientists what they want from evidence, viz., a good reason to believe a hypothesis; and (2) they provide concepts that make the evidential relationship a priori, whereas typically establishing an evidential claim requires empirical investigation.
  •  12
    Sociological versus metascientific views of technological risk assessment
    In Kristin Shrader-Frechette & Laura Westra (eds.), Technology and Values, Rowman & Littlefield. pp. 217. 1997.
  •  25
    About Thinking (review)
    Teaching Philosophy 5 (1): 80-83. 1982.
  •  63
    Some methodological issues in experimental economics
    Philosophy of Science 75 (5): 633-645. 2008.
    The growing acceptance and success of experimental economics has increased the interest of researchers in tackling philosophical and methodological challenges to which their work increasingly gives rise. I sketch some general issues that call for the combined expertise of experimental economists and philosophers of science, of experiment, and of inductive-statistical inference and modeling.
  •  68
    In defense of the Neyman-Pearson theory of confidence intervals
    Philosophy of Science 48 (2): 269-280. 1981.
    In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which…
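    The contrast between long-run coverage and 'final precision' can be illustrated with a short simulation (my sketch, not from the paper): the 95% attaches to the procedure's performance over repeated samples, not to any single computed interval.

```python
# Illustrative sketch: coverage of a 95% confidence interval for a Normal
# mean with known sigma, as a long-run property of the procedure.
import numpy as np

rng = np.random.default_rng(2)
true_mu, sigma, n, n_reps = 5.0, 2.0, 25, 10_000
z = 1.96                     # two-sided 95% normal quantile

covered = 0
for _ in range(n_reps):
    sample = rng.normal(true_mu, sigma, size=n)
    half_width = z * sigma / np.sqrt(n)
    lo, hi = sample.mean() - half_width, sample.mean() + half_width
    covered += (lo <= true_mu <= hi)

print(covered / n_reps)      # close to 0.95 over repeated samples
```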
  •  56
    Error and the growth of experimental knowledge
    International Studies in the Philosophy of Science 15 (1): 455-459. 1996.
  •  164
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view…
  •  98
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest an…
  •  65
    Error and the Growth of Experimental Knowledge
    with Michael Kruse
    Philosophical Review 107 (2): 324. 1998.
    Once upon a time, logic was the philosopher’s tool for analyzing scientific reasoning. Nowadays, probability and statistics have largely replaced logic, and their most popular application—Bayesianism—has replaced the qualitative deductive relationship between a hypothesis h and evidence e with a quantitative measure of h’s probability in light of e.
  •  36
    Response to Howson and Laudan
    Philosophy of Science 64 (2): 323-333. 1997.
    A toast is due to one who slays / Misguided followers of Bayes, / And in their heart strikes fear and terror / With probabilities of error! (E.L. Lehmann)
  •  223
    In a recent discussion note Sober (1985) elaborates on the argument given in Sober (1982) to show the inadequacy of Ronald Giere's (1979, 1980) causal model for cases of frequency-dependent causation, and denies that Giere's (1984) response avoids the problem he raises. I argue that frequency-dependent effects do not pose a problem for Giere's original causal model, and that all parties in this dispute have been guilty of misinterpreting the counterfactual populations involved in applying Giere's…