•  555
    Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical vir…
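    The idea behind Akaike's result can be sketched with a toy example (a minimal illustration under assumed Gaussian errors, not the paper's own example): each candidate model is scored by its fit penalized by its number of adjustable parameters, here in the common form n·ln(RSS/n) + 2k, and the model with the lowest score is estimated to be the most predictively accurate.

```python
import math
import random

def fit_constant(xs, ys):
    """Model 1: y = a (one adjustable parameter)."""
    a = sum(ys) / len(ys)
    return [a] * len(ys), 1

def fit_line(xs, ys):
    """Model 2: y = a + b*x (two adjustable parameters), ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return [a + b * x for x in xs], 2

def aic(ys, preds, k):
    """AIC for Gaussian errors, constants dropped: n*ln(RSS/n) + 2k."""
    n = len(ys)
    rss = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return n * math.log(rss / n) + 2 * k

# Synthetic data from a genuinely linear process with noise.
random.seed(0)
xs = [i / 10 for i in range(30)]
ys = [2.0 + 1.5 * x + random.gauss(0, 0.3) for x in xs]

scores = {}
for name, fit in [("constant", fit_constant), ("line", fit_line)]:
    preds, k = fit(xs, ys)
    scores[name] = aic(ys, preds, k)

best = min(scores, key=scores.get)
print(best)  # the linear model wins: its better fit outweighs its extra parameter
```

The point of the sketch is that the data themselves, via the AIC score, favor one curve form over another; no prior probabilities or background theory are consulted.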
  •  245
    The Emergence of the Macroworld: A Study of Intertheory Relations in Classical and Quantum Mechanics
    with Alexey Kryukov
    Philosophy of Science 70 (5): 1039-1051. 2003.
    Classical mechanics is empirically successful because the probabilistic mean values of quantum mechanical observables follow the classical equations of motion to a good approximation (Messiah 1970, 215). We examine this claim for the one-dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force used to define "pressure." The examples illustrate the importance of probabilistic averaging as a met…
  •  179
    Counterexamples to a likelihood theory of evidence
    Minds and Machines 16 (3): 319-338. 2006.
    The likelihood theory of evidence (LTE) says, roughly, that all the information relevant to the bearing of data on hypotheses (or models) is contained in the likelihoods. There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples suggest that some forms of scientific reasoning, such as the consilience of inductions (Whewell, 1858. In Novum organon renovatum (Part II of the 3rd ed.). The philosophy of th…
  •  159
    Although in every inductive inference, an act of invention is requisite, the act soon slips out of notice. Although we bind together facts by superinducing upon them a new Conception, this Conception, once introduced and applied, is looked upon as inseparably connected with the facts, and necessarily implied in them. Having once had the phenomena bound together in their minds in virtue of the Conception men can no longer easily restore them back to the detached and incoherent condition in which …
  •  155
    Predictive accuracy as an achievable goal of science
    Proceedings of the Philosophy of Science Association 2002 (3). 2002.
    What has science actually achieved? A theory of achievement should define what has been achieved, describe the means or methods used in science, and explain how such methods lead to such achievements. Predictive accuracy is one truth‐related achievement of science, and there is an explanation of why common scientific practices tend to increase predictive accuracy. Akaike’s explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the gen…
  •  151
    A Philosopher’s Guide to Empirical Success
    Philosophy of Science 74 (5): 588-600. 2007.
    The simple question 'What is empirical success?' turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and 'fudged fit', which is akin to the distinction between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed. This paper was written when I was a visi…
  •  141
    Ramsey, Stich and Garon (1991) argue that if the correct theory of mind is some parallel distributed processing theory, then folk psychology must be false. Their idea is that if the nodes and connections that encode one representation are causally active, then all representations encoded by the same set of nodes and connections are also causally active. We present a clear and concrete counterexample to RSG's argument. In conclusion, we suggest that folk psychology and connectionism are best und…
  •  105
    Textbooks in quantum mechanics frequently claim that quantum mechanics explains the success of classical mechanics because “the mean values [of quantum mechanical observables] follow the classical equations of motion to a good approximation,” while “the dimensions of the wave packet be small with respect to the characteristic dimensions of the problem.” The equations in question are Ehrenfest’s famous equations. We examine this case for the one-dimensional motion of a particle in a box, and exte…
  •  96
    How do simple rules 'fit to reality' in a complex world?
    Minds and Machines 9 (4): 543-564. 1999.
    The theory of fast and frugal heuristics, developed in a new book called Simple Heuristics That Make Us Smart (Gigerenzer, Todd, and the ABC Research Group, in press), includes two requirements for rational decision making. One is that decision rules are bounded in their rationality: rules are frugal in what they take into account, and therefore fast in their operation. The second is that the rules are ecologically adapted to the environment, which means that they 'fit to reality.' The ma…
  •  94
    Bayes and Bust: Simplicity as a Problem for a Probabilist’s Approach to Confirmation (review)
    British Journal for the Philosophy of Science 46 (3): 399-424. 1995.
    The central problem with Bayesian philosophy of science is that it cannot take account of the relevance of simplicity and unification to confirmation, induction, and scientific inference. The standard Bayesian folklore about factoring simplicity into the priors, and convergence theorems as a way of grounding their objectivity, are some of the myths that Earman's book does not address adequately. Review of John Earman, Bayes or Bust?, Cambridge, MA: MIT Press, 1992, £33.75 cloth.
  •  91
    Counterfactual reasoning in the Bell-EPR paradox
    Philosophy of Science 53 (1): 133-144. 1986.
    Skyrms's formulation of the argument against stochastic hidden variables in quantum mechanics using conditionals with chance consequences suffers from an ambiguity in its "conservation" assumption. The strong version, which Skyrms needs, packs in a "no-rapport" assumption in addition to the weaker statement of the "experimental facts." On the positive side, I argue that Skyrms's proof has two unnoted virtues (not shared by previous proofs): (1) it shows that certain difficulties that arise for d…
  •  91
    Unification and Scientific Realism Revisited
    PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1986. 1986.
    Van Fraassen has argued that quantum mechanics does not conform to the pattern of common cause explanation used by Salmon as a precise formulation of Smart's 'cosmic coincidence' argument for scientific realism. This paper adds to this list some common examples from classical physics that also do not conform to Salmon's explanatory schema. This is bad news and good news for the realist. The bad news is that Salmon's argument for realism does not work; the good news is that realism need not deman…
  •  90
    The Frugal Inference of Causal Relations
    with Garvesh Raskutti, Reuben Stern, and Naftali Weinberger
    British Journal for the Philosophy of Science 69 (3): 821-848. 2018.
    Recent approaches to causal modelling rely upon the causal Markov condition, which specifies which probability distributions are compatible with a directed acyclic graph. Further principles are required in order to choose among the large number of DAGs compatible with a given probability distribution. Here we present a principle that we call frugality. This principle tells one to choose the DAG with the fewest causal arrows. We argue that frugality has several desirable properties compared to th…
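    The frugality principle can be sketched as a toy search (the candidate DAGs and the observed independence pattern below are invented for illustration; the independencies each DAG implies are standard d-separation facts for three-variable graphs): among the DAGs whose implied conditional independencies all hold in the data, prefer the one with the fewest causal arrows.

```python
# Each candidate: (name, list of arrows, set of independencies it implies).
# An independence (A, B, C) reads "A is independent of B given C";
# C = None means marginal independence.
candidates = [
    ("chain X->Y->Z",
     [("X", "Y"), ("Y", "Z")],
     {("X", "Z", "Y")}),                  # chain implies X indep Z given Y
    ("complete DAG",
     [("X", "Y"), ("Y", "Z"), ("X", "Z")],
     set()),                              # a complete DAG implies nothing
    ("collider X->Z<-Y",
     [("X", "Z"), ("Y", "Z")],
     {("X", "Y", None)}),                 # collider implies X indep Y marginally
]

# Suppose independence tests on the data found exactly: X indep Z given Y.
observed = {("X", "Z", "Y")}

# Causal Markov condition: every independence the DAG implies must hold
# in the data (implied set is a subset of the observed set).
compatible = [c for c in candidates if c[2].issubset(observed)]

# Frugality: among the compatible DAGs, pick the one with fewest arrows.
frugal = min(compatible, key=lambda c: len(c[1]))
print(frugal[0])  # the 2-arrow chain beats the 3-arrow complete DAG
```

Note that the complete DAG is compatible with any distribution, so the Markov condition alone cannot rule it out; frugality is what breaks the tie in favor of the sparser chain.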
  •  87
    The golfer's dilemma: A reply to Kukla on curve-fitting
    British Journal for the Philosophy of Science 46 (3): 348-360. 1995.
    Curve-fitting typically works by trading off goodness-of-fit with simplicity, where simplicity is measured by the number of adjustable parameters. However, such methods cannot be applied in an unrestricted way. I discuss one such correction, and explain why the exception arises. The same kind of probabilistic explanation offers a surprising resolution to a common-sense dilemma.
  •  70
    What is induction? John Stuart Mill (1874, p. 208) defined induction as the operation of discovering and proving general propositions. William Whewell (in Butts, 1989, p. 266) agrees with Mill’s definition as far as it goes. Is Whewell therefore assenting to the standard concept of induction, which talks of inferring a generalization of the form “All As are Bs” from the premise that “All observed As are Bs”? Does Whewell agree, to use Mill’s example, that inferring “All humans are mortal” from t…
  •  68
    Model selection in science: The problem of language variance
    British Journal for the Philosophy of Science 50 (1): 83-102. 1999.
    Recent solutions to the curve-fitting problem, described in Forster and Sober ([1995]), trade off the simplicity and fit of hypotheses by defining simplicity as the paucity of adjustable parameters. Scott De Vito ([1997]) charges that these solutions are 'conventional' because he thinks that the number of adjustable parameters may change when the hypotheses are described differently. This he believes is exactly what is illustrated in Goodman's new riddle of induction, otherwise known as the grue…
  •  63
    Unification, explanation, and the composition of causes in Newtonian mechanics
    Studies in History and Philosophy of Science Part A 19 (1): 55-101. 1988.
    William Whewell’s philosophy of scientific discovery is applied to the problem of understanding the nature of unification and explanation by the composition of causes in Newtonian mechanics. The essay attempts to demonstrate: the sense in which 'approximate' laws successfully refer to real physical systems rather than to idealizations of them; why good theoretical constructs are not badly underdetermined by observation; and why, in particular, Newtonian forces are not conventional and how empiri…
  •  62
    The paper provides a formal proof that efficient estimates of parameters, which vary as little as possible when measurements are repeated, may be expected to provide more accurate predictions. The definition of predictive accuracy is motivated by the work of Akaike (1973). Surprisingly, the same explanation provides a novel solution to a well-known problem for standard theories of scientific confirmation: the Ravens Paradox. This is significant in light of the fact that standard Bayesian an…
  •  62
    Deductive logic is about the validity of arguments. An argument is valid when its conclusion follows deductively from its premises. Here’s an example: If Alice is guilty then Bob is guilty, and Alice is guilty. Therefore, Bob is guilty. The validity of the argument has nothing to do with what the argument is about. It has nothing to do with the meaning, or content, of the argument beyond the meaning of logical phrases such as if…then. Thus, any argument of the following form (called modus ponens…
  •  51
    This chapter examines four solutions to the problem of many models, and finds some fault or limitation with all of them except the last. The first is the naïve empiricist view that the best model is the one that best fits the data. The second is based on Popper’s falsificationism. The third approach is to compare models on the basis of some kind of trade-off between fit and simplicity. The fourth is the most powerful: cross-validation testing.
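    Cross-validation testing can be sketched in miniature (a toy illustration with invented data, not the chapter's own example): each candidate model is repeatedly fitted on all but one data point and scored on the point held out, so a model is rewarded for predicting unseen data rather than merely for fitting the data in hand.

```python
import random

def loo_cv_error(xs, ys, fit):
    """Leave-one-out cross-validation: fit on n-1 points, score the held-out one."""
    err = 0.0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        predict = fit(train_x, train_y)
        err += (ys[i] - predict(xs[i])) ** 2
    return err / len(xs)

def fit_constant(xs, ys):
    """Candidate 1: predict the mean of the training data."""
    a = sum(ys) / len(ys)
    return lambda x: a

def fit_line(xs, ys):
    """Candidate 2: ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

# Synthetic data from a linear process with noise.
random.seed(1)
xs = [i / 5 for i in range(25)]
ys = [1.0 + 2.0 * x + random.gauss(0, 0.4) for x in xs]

cv_const = loo_cv_error(xs, ys, fit_constant)
cv_line = loo_cv_error(xs, ys, fit_line)
print("constant:", round(cv_const, 3), "line:", round(cv_line, 3))
```

Because every score comes from predicting a point the model never saw, an overly simple (or overly flexible) model is penalized automatically, without any explicit simplicity term.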
  •  47
    Scientific discovery is often regarded as romantic and creative - and hence unanalyzable - whereas the everyday process of verifying discoveries is sober and more suited to analysis. Yet this fascinating exploration of how scientific work proceeds argues that however sudden the moment of discovery may seem, the discovery process can be described and modeled. Using the methods and concepts of contemporary information-processing psychology (or cognitive science) the authors develop a series of art…
  •  45
    Sober (1984) has considered the problem of determining the evidential support, in terms of likelihood, for a hypothesis that is incomplete in the sense of not providing a unique probability function over the event space in its domain. Causal hypotheses are typically like this because they do not specify the probability of their initial conditions. Sober's (1984) solution to this problem does not work, as will be shown by examining his own biological examples of common cause explanation. The prop…
  •  45
    Kenneth Wilson won the Nobel Prize in Physics in 1982 for applying renormalization group, which he learnt from quantum field theory (QFT), to problems in statistical physics—the induced magnetization of materials (ferromagnetism) and the evaporation and condensation of fluids (phase transitions). See Wilson (1983). The renormalization group got its name from its early applications in QFT. There, it appeared to be a rather ad hoc method of subtracting away unwanted infinities. The further allegat…
  •  33
    The Value of Good Illustrative Examples: In order to speak as generally as possible about science, philosophers of science have traditionally formulated their theses in terms of elementary logic and elementary probability theory. They often point to real scientific examples without explaining them in detail and/or use artificial examples that fail to fit with intricacies of real examples. Sometimes their illustrative examples are chosen to fit their framework, rather than the science. Frequently…
  •  33
    Wayne Myrvold (2003) has captured an important feature of unified theories, and he has done so in Bayesian terms. What is not clear is whether the virtue of such unification is most clearly understood in terms of Bayesian confirmation. I argue that the virtue of such unification is better understood in terms of other truth-related virtues such as predictive accuracy.
  •  31
    Predictive Accuracy as an Achievable Goal of Science
    Philosophy of Science 69 (S3). 2002.
    What has science actually achieved? A theory of achievement should define what has been achieved, describe the means or methods used in science, and explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices tend to increase predictive accuracy. Akaike's explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the gen…