•  Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is als…
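    Purely as an illustration of the "next-word training" mentioned above (not the author's argument, and nothing like ChatGPT's actual transformer architecture), here is a minimal Python sketch that learns next-word statistics by counting bigrams in a toy corpus; the corpus and names are invented for the example.

        # Toy "next-word" model: training is just counting which word follows
        # which in the corpus; prediction picks the most frequent continuation.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat . the dog sat on the rug .".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1          # "training": accumulate bigram counts

        def predict_next(word):
            """Return the most frequent continuation seen during training."""
            continuations = follows.get(word)
            return continuations.most_common(1)[0][0] if continuations else None

        print(predict_next("the"))           # -> 'cat' (all continuations tie; first seen wins)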
  • Grounding Symbolic Representation in Categorical Perception
    Dissertation, Princeton University. 1992.
    How do internal symbols become connected to the objects they stand for? A symbol system is a set of physical objects or states and the formal rules for manipulating them. The rules are syntactic, operating only on the shapes of the symbols, not their meanings. Yet the symbol combinations can be given a systematic interpretation (e.g., as standing for objects or describing states of affairs). These meanings, however, are not "grounded"; they derive from the mind of the interpreter of the symbols. How can the meanings of symbols be …
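    A minimal Python sketch of the kind of symbol system described above, offered only as an illustration (it is not from the dissertation): the rules match and rewrite token shapes, while the interpretation of the strings as unary addition exists only in the mind of the reader of the code.

        # Toy symbol system: formal rules that operate only on token shapes.
        # An outside interpreter can read the strings as unary addition
        # ("||+|||" meaning 2 + 3), but that meaning plays no role in the rules.
        def step(s):
            if "+|" in s:                    # shape rule: move one stroke across the '+'
                return s.replace("+|", "|+", 1)
            if s.endswith("+"):              # shape rule: erase a trailing '+'
                return s[:-1]
            return None                      # no rule applies: halt

        s = "||+|||"
        while s is not None:
            print(s)                         # ||+|||  |||+||  ||||+|  |||||+  |||||
            s = step(s)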
•  The evolution of language made it possible for us to think aloud, share our thoughts, and pass them on by word of mouth.
•  Language and the game of life
    Behavioral and Brain Sciences 28 (4): 497-498. 2005.
    Steels & Belpaeme's (S&B's) simulations contain all the right components, but they are put together wrongly. Color categories are unrepresentative of categories in general and language is not merely naming. Language evolved because it provided a powerful new way to acquire categories (through instruction, rather than just the old way of other species, through trial-and-error experience). It did not evolve so that multiple agents looking at the same objects could let one another know which of the…
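    As an illustrative sketch only (not S&B's simulation or the author's model), the toy Python below contrasts the two routes to the same category: trial-and-error sampling with corrective feedback versus a single verbal instruction; the "size greater than 5" category and all names are invented.

        # Two routes to the same category ("size greater than 5"), invented here.
        import random

        def in_category(size):               # ground truth, unknown to the learners
            return size > 5

        # Old route: trial and error with corrective feedback, squeezing an
        # estimate of the category boundary between observed members/non-members.
        random.seed(0)
        low, high = 0.0, 10.0
        for _ in range(20):
            size = random.uniform(0, 10)
            if in_category(size):
                high = min(high, size)       # smallest size seen to be in the category
            else:
                low = max(low, size)         # largest size seen to be outside it
        trial_and_error_boundary = (low + high) / 2

        # New route: instruction. One sentence transmits the category directly.
        instructed_boundary = 5.0            # "it's in the category if its size exceeds 5"

        print(trial_and_error_boundary, instructed_boundary)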
•  Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind, when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do, such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguis…
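    A toy Python sketch of the "pen-pal" form of the test described above, purely illustrative: the judge sees only verbal exchanges from two hidden correspondents and must say which is the machine; the placeholder responders and question are invented.

        # Toy "pen-pal" test: the judge exchanges text with two hidden
        # correspondents and must say which is the machine. With verbally
        # indistinguishable responders, the judge is right about half the time.
        import random

        def machine_penpal(question):        # placeholder responder
            return "I'd rather not say."

        def human_penpal(question):          # placeholder responder
            return "I'd rather not say."

        def judge_once(questions):
            pals = [("machine", machine_penpal), ("human", human_penpal)]
            random.shuffle(pals)             # labels hidden, order randomised
            a, b = ([f(q) for q in questions] for _, f in pals)
            guess = 0 if a != b else random.randint(0, 1)   # no difference found: forced to guess
            return pals[guess][0] == "machine"

        trials = [judge_once(["How do you feel today?"]) for _ in range(1000)]
        print(sum(trials) / len(trials))     # close to 0.5: no basis for telling them apart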
•  The usual way to try to ground knowing, according to contemporary theory of knowledge, is: We know something if (1) it’s true, (2) we believe it, and (3) we believe it for the “right” reasons. Floridi proposes a better way. His grounding is based partly on probability theory, and partly on a question/answer network of verbal and behavioural interactions evolving in time. This is rather like modeling the data-exchange between a data-seeker who needs to know which button to press on a food-dispenser…
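    Offered only as an illustration of the probabilistic question/answer idea sketched above (not Floridi's or the author's actual model): a data-seeker who must learn which button on a food-dispenser yields food asks yes/no questions of a truthful informant and updates a probability distribution over the candidate buttons; the setup and names are invented.

        # Toy data-seeker: which of 8 buttons dispenses food? Ask yes/no
        # questions of a truthful informant and update a distribution over
        # the candidates (plain Bayesian elimination in the noiseless case).
        SECRET_BUTTON = 5                    # known to the informant, not the seeker
        buttons = list(range(8))
        belief = {b: 1 / len(buttons) for b in buttons}   # uniform prior

        def informant_says_yes(asked):
            return SECRET_BUTTON in asked    # truthful answer to "is it one of these?"

        while max(belief.values()) < 1.0:
            possible = [b for b in buttons if belief[b] > 0]
            asked = possible[: len(possible) // 2]        # ask about half the live candidates
            yes = informant_says_yes(asked)
            for b in buttons:                             # zero out inconsistent candidates
                if (b in asked) != yes:
                    belief[b] = 0.0
            total = sum(belief.values())
            belief = {b: p / total for b, p in belief.items()}

        print(max(belief, key=belief.get))   # -> 5: the button worth pressing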