•
    We examined the role of fitness, commonly assumed without proof to be conferred by the mastery of language, in shaping the dynamics of language evolution. To that end, we introduced island migration (a concept borrowed from population genetics) into the shared lexicon model of communication (Nowak et al., 1999). The effect of fitness linear in language coherence was compared to a control condition of neutral drift. We found that in the neutral condition (no coherence-dependent fitness) even a small…
  •
    We compare our model of unsupervised learning of linguistic structures, ADIOS [1, 2, 3], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensi…
  •
    Shape representation by second-order isomorphism and the chorus model: SIC
    Behavioral and Brain Sciences 21 (4): 484-493. 1998.
    Proximal mirroring of distal similarities is, at present, the only solution to the problem of representation that is both theoretically sound (for reasons discussed in the target article) and practically feasible (as attested by the performance of the Chorus model). Augmenting the latter by a capability to refer selectively to retinotopically defined object fragments should lead to a comprehensive theory of shape processing.
  • On what it means to see, and what we can do about it
    In S. Dickinson, A. Leonardis, B. Schiele & M. J. Tarr (eds.), Object Categorization: Computer and Human Vision Perspectives, Cambridge University Press. 2008.
  •
    We describe a new approach to the visual recognition of cursive handwriting. An effort is made to attain humanlike performance by using a method based on pictorial alignment and on a model of the process of handwriting.
  •
    The publication in 1982 of David Marr’s Vision delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr’s Vision.
  •
    Two of the premises of the target paper -- surface reconstruction as the goal of early vision, and inaccessibility of intermediate stages in the process presumably leading to such reconstruction -- are questioned and found wanting.
  •
    We describe a linguistic pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of corpus data. This is achieved by compactly coding recursively structured constituent patterns, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. The resulting representations constitute an efficient encoding of linguistic knowledge and support systematic generalization to unseen sentences.
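    The backbone/equivalence-class idea can be sketched in a few lines. This is a toy illustration only, not the published algorithm: the sentences and the single-slot criterion are invented for the example.

    ```python
    from collections import defaultdict

    # Toy corpus (invented). Strings that share a "backbone" -- the same
    # sentence with one slot abstracted away -- contribute their slot
    # fillers to a common equivalence class.
    sentences = [
        ("the", "cat", "is", "here"),
        ("the", "dog", "is", "here"),
        ("the", "cat", "is", "there"),
    ]

    classes = defaultdict(set)
    for s in sentences:
        for i, w in enumerate(s):
            backbone = s[:i] + ("_",) + s[i + 1:]
            classes[backbone].add(w)

    # Backbones attracting more than one filler define equivalence classes.
    for backbone, fillers in sorted(classes.items()):
        if len(fillers) > 1:
            print(" ".join(backbone), "->", sorted(fillers))
    ```

    Run on this corpus, the sketch groups {cat, dog} under the backbone "the _ is here" and {here, there} under "the cat is _", which is the kind of generalization the abstract describes.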
  •
    How representation works is more important than what representations are
    Behavioral and Brain Sciences 18 (4): 630-631. 1995.
    A theory of representation is incomplete if it states “representations are X”, where X can be symbols, cell assemblies, functional states, or the flock of birds from Theaetetus, without explaining the nature of the link between the universe of Xs and the world. Amit's thesis, equating representations with reverberations in Hebbian cell assemblies, will only be considered a solution to the problem of representation when it is complemented by a theory of how a reverberation in the brain can be a represe…
  •
    We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel well-formed sentences. We also introduce a new corpus-based method for asse…
  •
    Things are what they seem
    Behavioral and Brain Sciences 21 (1): 25-25. 1998.
    The learnability of features and their dependence on task and context do not rule out the possibility that primitives used for constructing new features are as small as pixels, nor that they are as large as object parts, or even entire objects. In fact, the simplest approach to feature acquisition may be to treat objects not as if they are composed of unknown primitives according to unknown rules, but rather as if they are what they seem: patterns of atomic features, standing in various similari…
  •
    Beer’s paper devotes much energy to buttressing the walls of Castle Dynamic and dredging its moat in the face of what some of its dwellers perceive as a besieging army chanting “no cognition without representation”. The divide is real, as attested by the contrast between titles such as “Intelligence without representation” and “In defense of representation”, to pick just one example from each side. It is, however, not too late for people from both sides of the moat to meet on the drawbridge and…
  •
    We compare our model of unsupervised learning of linguistic structures, ADIOS [1], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive L…
  •
    Learn Locally, Act Globally: Learning Language from Variation Set Cues
    with Luca Onnis and Heidi R. Waterfall
    Cognition 109 (3): 423. 2008.
  •
    On the virtues of going all the way
    with Elise M. Breen
    Behavioral and Brain Sciences 22 (4): 614-614. 1999.
    Representational systems need to use symbols as internal stand-ins for distal quantities and events. Barsalou's ideas go a long way towards making the symbol system theory of representation more appealing, by delegating one critical part of the representational burden to image-like entities. The target article, however, leaves the other critical component of any symbol system theory underspecified. We point out that the binding problem can be alleviated if a perceptual symbol system is made to r…
  •
    Vision, hyperacuity
    with Yair Weiss
    In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks, Mit Press. pp. 1009--1012. 1995.
  •
    (a) Learn a grammar G_A for the source language (A). (b) Estimate a structural statistical language model SSLM_A for (A). Given a grammar (consisting of terminals and nonterminals) and a partial sentence (a sequence of terminals t_1 … t_i), an SSLM assigns probabilities to the possible choices of the next terminal t_{i+1}.
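    The next-terminal distribution can be illustrated with a deliberately crude stand-in. The corpus and the conditioning scheme below are invented for the example; a genuine SSLM_A would condition on the structural (parse) state licensed by the grammar, not merely on the last terminal.

    ```python
    from collections import Counter, defaultdict

    # Toy corpus of terminal sequences (invented for illustration).
    corpus = [
        ["the", "cat", "sat"],
        ["the", "dog", "sat"],
        ["the", "cat", "ran"],
    ]

    # Count next-terminal occurrences conditioned on the preceding terminal.
    counts = defaultdict(Counter)
    for sentence in corpus:
        for prev, nxt in zip(sentence, sentence[1:]):
            counts[prev][nxt] += 1

    def next_terminal_probs(partial):
        """Approximate P(t_{i+1} | t_1 ... t_i) by conditioning on t_i only."""
        last = partial[-1]
        total = sum(counts[last].values())
        return {t: c / total for t, c in counts[last].items()}

    print(next_terminal_probs(["the"]))
    ```

    On this corpus, the partial sentence ("the") yields P(cat) = 2/3 and P(dog) = 1/3, which is the kind of next-terminal distribution the SSLM is asked to supply.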
  •
    Towards structural systematicity in distributed, statically bound visual representations
    with Nathan Intrator
    Cognitive Science 23 (1): 73-110. 2003.
    The problem of representing the spatial structure of images, which arises in visual object processing, is commonly described using terminology borrowed from propositional theories of cognition, notably, the concept of compositionality. The classical propositional stance mandates representations composed of symbols, which stand for atomic or composite entities and enter into arbitrarily nested relationships. We argue that the main desiderata of a representational system — productivity and systema…
  •
    Generative grammar with a human face?
    Behavioral and Brain Sciences 26 (6): 675-676. 2003.
    The theoretical debate in linguistics during the past half-century bears an uncanny parallel to the politics of the (now defunct) Communist Bloc. The parallels are not so much in the revolutionary nature of Chomsky's ideas as in the Bolshevik manner of his takeover of linguistics (Koerner 1994) and in the Trotskyist (“permanent revolution”) flavor of the subsequent development of the doctrine of Transformational Generative Grammar (TGG) (Townsend & Bever 2001, pp. 37–40). By those standards, Jac…
  •
    Brahe, looking for Kepler
    Behavioral and Brain Sciences 23 (4): 538-540. 2000.
    Arbib, Érdi, and Szentágothai's book should be required reading for any serious student of the brain. The scope and the accessibility of its presentation of the neurobiological data (especially the functional anatomy of select parts of the central nervous system) more than make up for the peculiarities of the theoretical stance it adopts.
  •
    We outline an unsupervised language acquisition algorithm and offer some psycholinguistic support for a model based on it. Our approach resembles the Construction Grammar in its general philosophy, and the Tree Adjoining Grammar in its computational characteristics. The model is trained on a corpus of transcribed child-directed speech (CHILDES). The model’s ability to process novel inputs makes it capable of taking various standard tests of English that rely on forced-choice judgment and on magn…