•  The distributional principle according to which morphemes that occur in identical contexts belong, in some sense, to the same category [1] has been advanced as a means for extracting syntactic structures from corpus data. We extend this principle by applying it recursively, and by using mutual information for estimating category coherence. The resulting model learns, in an unsupervised fashion, highly structured, distributed representations of syntactic knowledge from corpora. It also exhibits p…
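    The following is a minimal Python sketch of the distributional idea summarized above, not the model itself: words that share a (left, right) context in a toy corpus are grouped into candidate categories, and a category's coherence is scored by the average pointwise mutual information (PMI) between its members and their contexts. The corpus and the particular coherence score are illustrative assumptions.

    # Sketch: distributional categories scored by mutual information (toy data).
    from collections import defaultdict
    from math import log2

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "a cat slept on the mat",
        "a dog slept on the rug",
    ]

    word_context = defaultdict(set)   # word -> set of (left, right) contexts
    pair_count = defaultdict(int)     # (word, context) -> count
    word_count = defaultdict(int)
    context_count = defaultdict(int)
    total = 0

    for line in corpus:
        tokens = ["<s>"] + line.split() + ["</s>"]
        for i in range(1, len(tokens) - 1):
            w, ctx = tokens[i], (tokens[i - 1], tokens[i + 1])
            word_context[w].add(ctx)
            pair_count[(w, ctx)] += 1
            word_count[w] += 1
            context_count[ctx] += 1
            total += 1

    # Words that occur in the same context form a candidate category.
    categories = defaultdict(set)
    for w, ctxs in word_context.items():
        for ctx in ctxs:
            categories[ctx].add(w)

    def coherence(words):
        """Average PMI between category members and the contexts they occur in."""
        pmis = [log2((pair_count[(w, c)] / total)
                     / ((word_count[w] / total) * (context_count[c] / total)))
                for w in words for c in word_context[w]]
        return sum(pmis) / len(pmis)

    for ctx, words in sorted(categories.items()):
        if len(words) > 1:
            print(ctx, sorted(words), round(coherence(words), 2))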
•  Representation, similarity, and the chorus of prototypes
    Minds and Machines 5 (1): 45-68. 1995.
    It is proposed to conceive of representation as an emergent phenomenon that is supervenient on patterns of activity of coarsely tuned and highly redundant feature detectors. The computational underpinnings of the outlined concept of representation are (1) the properties of collections of overlapping graded receptive fields, as in the biological perceptual systems that exhibit hyperacuity-level performance, and (2) the sufficiency of a set of proximal distances between stimulus representations fo…
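    As an illustration of the "chorus" idea sketched above (a Python toy, not the Chorus model itself; the Gaussian tuning width and the random "shapes" are assumptions), each stimulus is coded by the graded responses of a few broadly tuned, prototype-centered units, and distances among the resulting codes roughly track distances among the distal stimuli:

    # Toy "chorus of prototypes" code with overlapping graded receptive fields.
    import numpy as np

    rng = np.random.default_rng(0)
    prototypes = rng.normal(size=(5, 10))   # 5 reference items in a 10-D "distal" space
    stimuli = rng.normal(size=(20, 10))     # 20 novel stimuli in the same space

    def chorus(x, width=3.0):
        """Graded responses of broadly tuned units centered on the prototypes."""
        d = np.linalg.norm(prototypes - x, axis=1)
        return np.exp(-(d / width) ** 2)

    codes = np.array([chorus(x) for x in stimuli])

    def pairwise(X):
        """Upper-triangle pairwise Euclidean distances among the rows of X."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return D[np.triu_indices(len(X), k=1)]

    # Second-order isomorphism: code-space distances correlate with distal ones.
    print(np.corrcoef(pairwise(stimuli), pairwise(codes))[0, 1])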
•  Juvenile zebra finches learn the underlying structural regularities of their fathers’ song
    with Otília Menyhart, Oren Kolodny, Michael H. Goldstein, and Timothy J. DeVoogd
    Frontiers in Psychology 6. 2015.
•  Computing the Mind: How the Mind Really Works
    Oxford University Press. 2008.
    The account that Edelman gives in this book is accessible, yet unified and rigorous, and the big picture he presents is supported by evidence ranging from ...
•  …differentially rated pairwise similarity when confronted with two pairs of objects, each revolving in a separate window on a computer screen. Subject data were pooled using individually weighted MDS (ref. 11; in all the experiments, the solutions were consistent among subjects). In each trial, the subject had to select among two pairs of shapes the one consisting of the most similar shapes. The subjects were allowed to respond at will; most responded within 10 sec. Proximity (that is, perceived …
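    The pooling step can be illustrated with a simplified Python sketch: plain metric MDS from scikit-learn stands in for the individually weighted MDS used in the study, and the perceived-dissimilarity matrix below is made up.

    # Embed pooled pairwise dissimilarities into a 2-D "psychological space".
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical perceived dissimilarities among four shapes (symmetric, zero diagonal).
    D = np.array([
        [0.0, 0.2, 0.7, 0.8],
        [0.2, 0.0, 0.6, 0.9],
        [0.7, 0.6, 0.0, 0.3],
        [0.8, 0.9, 0.3, 0.0],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)       # one 2-D point per shape
    print(coords)
    print("stress:", round(mds.stress_, 3))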
•  The statistical structure of a class of objects such as human faces can be exploited to recognize familiar faces from novel viewpoints and under variable illumination conditions. We present computational and psychophysical data concerning the extent to which class-based learning transfers or generalizes within the class of faces. We first examine the computational prerequisite for generalization across views of novel faces, namely, the similarity of different faces to each other. We next describe t…
•  Construction-based approaches to syntax (Croft, 2001; Goldberg, 2003) posit a lexicon populated by units of various sizes, as envisaged by (Langacker, 1987). Constructions may be specified completely, as in the case of simple morphemes or idioms such as take it to the bank, or partially, as in the expression what’s X doing Y?, where X and Y are slots that admit fillers of particular types (Kay and Fillmore, 1999). Constructions offer an intriguing alternative to traditional rule-based syntax by hi…
•  Converging evidence from anatomical studies (Maunsell, 1983) and functional analyses (Hubel & Wiesel, 1968) of the nervous system suggests that the feed-forward pathway of the mammalian perceptual system follows a largely hierarchic organization scheme. This may be because hierarchic structures are intrinsically more viable and thus more likely to evolve (Simon, 2002). But it may also be because objects in our environment have a hierarchic structure and the perceptual system has evolved to matc…
•  We examined the role of fitness, commonly assumed without proof to be conferred by the mastery of language, in shaping the dynamics of language evolution. To that end, we introduced island migration (a concept borrowed from population genetics) into the shared lexicon model of communication (Nowak et al., 1999). The effect of fitness linear in language coherence was compared to a control condition of neutral drift. We found that in the neutral condition (no coherence-dependent fitness) even a small…
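    A toy Python sketch of the contrast studied here, under strong simplifying assumptions (a single population rather than islands, one lexicon-variant label per agent, Wright-Fisher reproduction), compares neutral drift with fitness that grows linearly with coherence:

    # Neutral drift vs. coherence-linked fitness in a toy population.
    import random

    def coherence(variant, population):
        """Fraction of the population sharing a given lexicon variant."""
        return sum(a == variant for a in population) / len(population)

    def step(population, selective):
        """One generation; reproduction weight is linear in coherence if selective."""
        weights = [1.0 + coherence(a, population) if selective else 1.0
                   for a in population]
        return random.choices(population, weights=weights, k=len(population))

    def run(selective, n=100, variants=5, generations=200, seed=0):
        random.seed(seed)
        pop = [random.randrange(variants) for _ in range(n)]
        for _ in range(generations):
            pop = step(pop, selective)
        return max(coherence(v, pop) for v in set(pop))

    print("neutral drift, final coherence:", run(selective=False))
    print("coherence-linked fitness, final coherence:", run(selective=True))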
•  We compare our model of unsupervised learning of linguistic structures, ADIOS [1, 2, 3], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensi…
•  Shape representation by second-order isomorphism and the chorus model: SIC
    Behavioral and Brain Sciences 21 (4): 484-493. 1998.
    Proximal mirroring of distal similarities is, at present, the only solution to the problem of representation that is both theoretically sound (for reasons discussed in the target article) and practically feasible (as attested by the performance of the Chorus model). Augmenting the latter by a capability to refer selectively to retinotopically defined object fragments should lead to a comprehensive theory of shape processing.
  • On what it means to see, and what we can do about it
    In S. Dickinson, A. Leonardis, B. Schiele & M. J. Tarr (eds.), Object Categorization: Computer and Human Vision Perspectives, Cambridge University Press. 2008.
•  We describe a new approach to the visual recognition of cursive handwriting. An effort is made to attain humanlike performance by using a method based on pictorial alignment and on a model of the process of handwriting.
•  The publication in 1982 of David Marr’s Vision has delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr’s Vision.
•  Two of the premises of the target paper -- surface reconstruction as the goal of early vision, and inaccessibility of intermediate stages in the process presumably leading to such reconstruction -- are questioned and found wanting.
•  We describe a linguistic pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of corpus data. This is achieved by compactly coding recursively structured constituent patterns, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. The resulting representations constitute an efficient encoding of linguistic knowledge and support systematic generalization to unseen sentences.
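    A far simpler Python illustration than the algorithm described above (the toy sentences are hypothetical): sentences whose backbones are identical except for one position are collapsed into a pattern with a slot, and the fillers of that slot are collected into an equivalence class.

    # Collapse sentences with an identical backbone into slot-and-filler patterns.
    from collections import defaultdict
    from itertools import combinations

    sentences = [
        "the cat is hungry",
        "the dog is hungry",
        "the bird is hungry",
        "the dog is sleepy",
    ]

    patterns = defaultdict(set)   # pattern with one open slot -> set of fillers

    for s1, s2 in combinations(sentences, 2):
        t1, t2 = s1.split(), s2.split()
        if len(t1) != len(t2):
            continue
        diffs = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
        if len(diffs) == 1:       # identical backbone, exactly one open slot
            i = diffs[0]
            pattern = tuple(t1[:i]) + ("X",) + tuple(t1[i + 1:])
            patterns[pattern].update({t1[i], t2[i]})

    for pattern, fillers in patterns.items():
        print(" ".join(pattern), "->", sorted(fillers))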
•  How representation works is more important than what representations are
    Behavioral and Brain Sciences 18 (4): 630-631. 1995.
    A theory of representation is incomplete if it states “representations are X” where X can be symbols, cell assemblies, functional states, or the flock of birds from Theaetetus, without explaining the nature of the link between the universe of Xs and the world. Amit's thesis, equating representations with reverberations in Hebbian cell assemblies, will only be considered a solution to the problem of representation when it is complemented by a theory of how a reverberation in the brain can be a represe…
•  We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel well-formed sentences. We also introduce a new corpus-based method for asse…
•  Things are what they seem
    Behavioral and Brain Sciences 21 (1): 25-25. 1998.
    The learnability of features and their dependence on task and context do not rule out the possibility that primitives used for constructing new features are as small as pixels, nor that they are as large as object parts, or even entire objects. In fact, the simplest approach to feature acquisition may be to treat objects not as if they are composed of unknown primitives according to unknown rules, but rather as if they are what they seem: patterns of atomic features, standing in various similari…
•  Beer’s paper devotes much energy to buttressing the walls of Castle Dynamic and dredging its moat in the face of what some of its dwellers perceive as a besieging army chanting “no cognition without representation”. The divide is real, as attested by the contrast between titles such as “Intelligence without representation” and “In defense of representation”, to pick just one example from each side. It is, however, not too late for people from both sides of the moat to meet on the drawbridge and…
•  We compare our model of unsupervised learning of linguistic structures, ADIOS [1], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive L…
•  Learn Locally, Act Globally: Learning Language from Variation Set Cues
    with Luca Onnis and Heidi R. Waterfall
    Cognition 109 (3): 423. 2008.
•  On the virtues of going all the way
    with Elise M. Breen
    Behavioral and Brain Sciences 22 (4): 614-614. 1999.
    Representational systems need to use symbols as internal stand-ins for distal quantities and events. Barsalou's ideas go a long way towards making the symbol system theory of representation more appealing, by delegating one critical part of the representational burden to image-like entities. The target article, however, leaves the other critical component of any symbol system theory underspecified. We point out that the binding problem can be alleviated if a perceptual symbol system is made to r…
•  Vision, hyperacuity
    with Yair Weiss
    In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks, MIT Press. pp. 1009-1012. 1995.
•  (a) Learn a grammar G_A for the source language (A). (b) Estimate a structural statistical language model SSLM_A for (A). Given a grammar (consisting of terminals and nonterminals) and a partial sentence (a sequence of terminals t_1 … t_i), an SSLM assigns probabilities to the possible choices of the next terminal t_{i+1}.
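    A minimal Python stand-in (not the actual SSLM estimation procedure) shows what such a model provides: for a tiny hypothetical grammar, the probability of each possible next terminal given a prefix is read off by enumerating the sentences the grammar generates, with every complete derivation weighted equally.

    # Next-terminal probabilities for a toy context-free grammar.
    from collections import defaultdict
    from itertools import product

    grammar = {                   # nonterminal -> list of right-hand sides
        "S":  [("NP", "VP")],
        "NP": [("the", "N")],
        "VP": [("V", "NP"), ("V",)],
        "N":  [("cat",), ("dog",)],
        "V":  [("sees",), ("sleeps",)],
    }

    def expand(symbols):
        """Yield every terminal string derivable from a sequence of symbols."""
        if not symbols:
            yield ()
            return
        head, rest = symbols[0], symbols[1:]
        if head in grammar:               # nonterminal: try each rule
            for rhs in grammar[head]:
                for left, right in product(expand(rhs), expand(rest)):
                    yield left + right
        else:                             # terminal: keep it and move on
            for right in expand(rest):
                yield (head,) + right

    def next_terminal_probs(prefix):
        """P(t_{i+1} | t_1 ... t_i), uniform over compatible complete derivations."""
        counts = defaultdict(int)
        for sentence in expand(("S",)):
            if sentence[:len(prefix)] == prefix and len(sentence) > len(prefix):
                counts[sentence[len(prefix)]] += 1
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}

    print(next_terminal_probs(("the", "cat")))   # {'sees': 0.5, 'sleeps': 0.5}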
•  Towards structural systematicity in distributed, statically bound visual representations
    with Nathan Intrator
    Cognitive Science 23 (1): 73-110. 2003.
    The problem of representing the spatial structure of images, which arises in visual object processing, is commonly described using terminology borrowed from propositional theories of cognition, notably, the concept of compositionality. The classical propositional stance mandates representations composed of symbols, which stand for atomic or composite entities and enter into arbitrarily nested relationships. We argue that the main desiderata of a representational system — productivity and systema…
•  Generative grammar with a human face?
    Behavioral and Brain Sciences 26 (6): 675-676. 2003.
    The theoretical debate in linguistics during the past half-century bears an uncanny parallel to the politics of the (now defunct) Communist Bloc. The parallels are not so much in the revolutionary nature of Chomsky's ideas as in the Bolshevik manner of his takeover of linguistics (Koerner 1994) and in the Trotskyist (“permanent revolution”) flavor of the subsequent development of the doctrine of Transformational Generative Grammar (TGG) (Townsend & Bever 2001, pp. 37–40). By those standards, Jac…
•  Brahe, looking for Kepler
    Behavioral and Brain Sciences 23 (4): 538-540. 2000.
    Arbib, Érdi, and Szentágothai's book should be required reading for any serious student of the brain. The scope and the accessibility of its presentation of the neurobiological data (especially the functional anatomy of select parts of the central nervous system) more than make up for the peculiarities of the theoretical stance it adopts.
•  We outline an unsupervised language acquisition algorithm and offer some psycholinguistic support for a model based on it. Our approach resembles the Construction Grammar in its general philosophy, and the Tree Adjoining Grammar in its computational characteristics. The model is trained on a corpus of transcribed child-directed speech (CHILDES). The model’s ability to process novel inputs makes it capable of taking various standard tests of English that rely on forced-choice judgment and on magn…