•  Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like…
•  Fallibilism, closure, and pragmatic encroachment
    Philosophical Studies 173 (10): 2745–2757. 2016.
    I argue that fallibilism, single-premise epistemic closure, and one formulation of the “knowledge-action principle” are inconsistent. I consider a possible way to avoid this incompatibility: advocating a pragmatic constraint on belief in general, rather than just knowledge. But I conclude that this is not a promising option for defusing the problem. I do not argue here for any one way of resolving the inconsistency.