• Thinking and being sure
    Philosophy and Phenomenological Research 106 (3): 634-654. 2022.
    How is what we believe related to how we act? That depends on what we mean by ‘believe’. On the one hand, there is what we're sure of: what our names are, where we were born, whether we are sitting in front of a screen. Surety, in this sense, is not uncommon — it does not imply Cartesian absolute certainty, from which no possible course of experience could dislodge us. But there are many things that we think but are not sure of. For example, you might think that it will rain sometime this mo…
  • Knowledge by constraint
    Philosophical Perspectives 35 (1): 1-28. 2021.
    This paper considers some puzzling knowledge ascriptions and argues that they present prima facie counterexamples to credence, belief, and justification conditions on knowledge, as well as to many of the standard meta-semantic assumptions about the context-sensitivity of ‘know’. It argues that these ascriptions provide new evidence in favor of contextualist theories of knowledge—in particular those that take the interpretation of ‘know’ to be sensitive to the mechanisms of constraint.
  • Thinking, Guessing, and Believing
    Ben Holguin
    Philosophers' Imprint 22 (1): 1-34. 2022.
    This paper defends the view, put roughly, that to think that p is to guess that p is the answer to the question at hand, and that to think that p rationally is for one’s guess to that question to be in a certain sense non-arbitrary. Some theses that will be argued for along the way include: that thinking is question-sensitive and, correspondingly, that ‘thinks’ is context-sensitive; that it can be rational to think that p while having arbitrarily low credence that p; that, nonetheless, rational …
  • (Counter)factual want ascriptions and conditional belief
    Thomas Grano and Milo Phillips-Brown
    Journal of Philosophy 119 (12): 641-672. 2022.
    What are the truth conditions of want ascriptions? According to an influential approach, they are intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff, within S’s belief set, S prefers the p worlds to the not-p worlds. This approach faces a well-known problem, however: it makes the wrong predictions for what we call (counter)factual want ascriptions, wherein the agent either believes p or believes not-p—for example, ‘I want it to rain tomorrow and that is exactly what is going to…
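    As a rough illustration of the belief-relative preference semantics stated in this abstract (a schematic rendering only; the symbols Bel_S for S’s belief set and ≻_S for S’s preference ordering are expository placeholders, not the authors’ notation):

    \[
      \ulcorner S \text{ wants } p \urcorner \text{ is true} \iff
      \{\, w \in \mathrm{Bel}_S : w \models p \,\} \;\succ_S\; \{\, w \in \mathrm{Bel}_S : w \models \neg p \,\}
    \]

    On this rendering, the (counter)factual cases the abstract mentions are those where Bel_S contains only p worlds or only not-p worlds, so one of the two compared sets is empty.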