•  18
    Two of the premises of the target paper -- surface reconstruction as the goal of early vision, and inaccessibility of intermediate stages in the process presumably leading to such reconstruction -- are questioned and found wanting.
  •  37
    The publication in 1982 of David Marr’s Vision delivered a singular boost and a course correction to the science of vision. Thirty years later, cognitive science is being transformed by the new ways of thinking about what it is that the brain computes, how it does that, and, most importantly, why cognition requires these computations and not others. This ongoing process still owes much of its impetus and direction to the sound methodology, engaging style, and unique voice of Marr’s Vision.
  •  96
    How representation works is more important than what representations are
    Behavioral and Brain Sciences 18 (4): 630-631. 1995.
    A theory of representation is incomplete if it states “representations are X” where X can be symbols, cell assemblies, functional states, or the flock of birds from Theaetetus, without explaining the nature of the link between the universe of Xs and the world. Amit's thesis, equating representations with reverberations in Hebbian cell assemblies, will only be considered a solution to the problem of representation when it is complemented by a theory of how a reverberation in the brain can be a represe…Read more
  •  61
    We describe a linguistic pattern acquisition algorithm that learns, in an unsupervised fashion, a streamlined representation of corpus data. This is achieved by compactly coding recursively structured constituent patterns, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. The resulting representations constitute an efficient encoding of linguistic knowledge and support systematic generalization to unseen sentences.
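    To make the equivalence-class step concrete, here is a minimal illustrative sketch: sentences that share an identical backbone (all words but one slot) are grouped, and the fillers of the variable slot form the class. The toy corpus and helper names are invented for the example; this is not the published algorithm, which operates over recursively structured patterns rather than single-slot templates.

    # Toy sketch of backbone-based equivalence classes (illustration only,
    # not the published pattern-acquisition procedure).
    from collections import defaultdict

    def equivalence_classes(sentences):
        """Map each backbone (words with one position masked) to its set of fillers."""
        classes = defaultdict(set)
        for sent in sentences:
            words = sent.split()
            for i, w in enumerate(words):
                backbone = tuple(words[:i] + ["_"] + words[i + 1:])
                classes[backbone].add(w)
        # keep only backbones attested with more than one distinct filler
        return {b: f for b, f in classes.items() if len(f) > 1}

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the mat",
        "the cat slept on the mat",
    ]
    for backbone, fillers in equivalence_classes(corpus).items():
        print(" ".join(backbone), "->", sorted(fillers))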
  •  50
    We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel wellformed sentences. We also introduce a new corpus-based method for asse…Read more
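    As a rough illustration of the kind of cross-utterance overlap at issue, the sketch below scores each pair of successive utterances by the proportion of shared word types. Both the measure and the sample utterances are invented for the example, and are only a crude proxy for the analyses reported in the paper.

    # Crude overlap score between successive utterances (illustration only).
    def successive_overlap(utterances):
        scores = []
        for prev, curr in zip(utterances, utterances[1:]):
            a, b = set(prev.lower().split()), set(curr.lower().split())
            scores.append(len(a & b) / max(len(a | b), 1))
        return scores

    child_directed = [
        "put the ball in the box",
        "put it in the box",
        "in the box",
    ]
    print(successive_overlap(child_directed))  # one score per successive pair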
  •  7
    Things are what they seem
    Behavioral and Brain Sciences 21 (1): 25-25. 1998.
    The learnability of features and their dependence on task and context do not rule out the possibility that primitives used for constructing new features are as small as pixels, nor that they are as large as object parts, or even entire objects. In fact, the simplest approach to feature acquisition may be to treat objects not as if they are composed of unknown primitives according to unknown rules, but rather as if they are what they seem: patterns of atomic features, standing in various similari…Read more
  •  44
    Beer’s paper devotes much energy to buttressing the walls of Castle Dynamic and dredging its moat in the face of what some of its dwellers perceive as a besieging army chanting “no cognition without representation”. The divide is real, as attested by the contrast between titles such as “Intelligence without representation” and “In defense of representation”, to pick just one example from each side. It is, however, not too late for people from both sides of the moat to meet on the drawbridge and…Read more
  •  60
    We compare our model of unsupervised learning of linguistic structures, ADIOS [1], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive L…Read more
  •  47
    Learn Locally, Act Globally: Learning Language from Variation Set Cues
    with Luca Onnis and Heidi R. Waterfall
    Cognition 109 (3): 423. 2008.
  •  76
    On the virtues of going all the way
    with Elise M. Breen
    Behavioral and Brain Sciences 22 (4): 614-614. 1999.
    Representational systems need to use symbols as internal stand-ins for distal quantities and events. Barsalou's ideas go a long way towards making the symbol system theory of representation more appealing, by delegating one critical part of the representational burden to image-like entities. The target article, however, leaves the other critical component of any symbol system theory underspecified. We point out that the binding problem can be alleviated if a perceptual symbol system is made to r…Read more
  •  36
    (a) Learn a grammar G_A for the source language (A). (b) Estimate a structural statistical language model SSLM_A for (A). Given a grammar (consisting of terminals and nonterminals) and a partial sentence (a sequence of terminals t_1 ... t_i), an SSLM assigns probabilities to the possible choices of the next terminal t_{i+1}.
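    The abstract does not specify the internal form of the SSLM; the sketch below is a hypothetical stand-in that shows only the interface described above, with a plain bigram count over terminals in place of a genuinely structural model. Class and variable names are invented for the example.

    # Hypothetical stand-in for an SSLM: given a partial sentence t_1 ... t_i,
    # return a probability for each candidate next terminal t_{i+1}.
    # A real structural statistical language model would condition on the
    # grammar's constituent structure; here a bigram count is used instead.
    from collections import Counter, defaultdict

    class ToySSLM:
        def __init__(self, corpus):
            self.bigrams = defaultdict(Counter)
            for sentence in corpus:
                tokens = ["<s>"] + sentence.split()
                for prev, nxt in zip(tokens, tokens[1:]):
                    self.bigrams[prev][nxt] += 1

        def next_terminal_probs(self, partial):
            tokens = partial.split()
            prev = tokens[-1] if tokens else "<s>"
            counts = self.bigrams[prev]
            total = sum(counts.values())
            return {t: c / total for t, c in counts.items()} if total else {}

    model = ToySSLM(["the cat sat", "the dog sat", "the cat slept"])
    print(model.next_terminal_probs("the cat"))  # {'sat': 0.5, 'slept': 0.5}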
  •  4
    Vision, hyperacuity
    with Yair Weiss
    In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks, MIT Press. pp. 1009-1012. 1995.
  •  50
    Generative grammar with a human face?
    Behavioral and Brain Sciences 26 (6): 675-676. 2003.
    The theoretical debate in linguistics during the past half-century bears an uncanny parallel to the politics of the (now defunct) Communist Bloc. The parallels are not so much in the revolutionary nature of Chomsky's ideas as in the Bolshevik manner of his takeover of linguistics (Koerner 1994) and in the Trotskyist (“permanent revolution”) flavor of the subsequent development of the doctrine of Transformational Generative Grammar (TGG) (Townsend & Bever 2001, pp. 37–40). By those standards, Jac…Read more
  •  74
    Towards structural systematicity in distributed, statically bound visual representations
    with Nathan Intrator
    Cognitive Science 23 (1): 73-110. 2003.
    The problem of representing the spatial structure of images, which arises in visual object processing, is commonly described using terminology borrowed from propositional theories of cognition, notably, the concept of compositionality. The classical propositional stance mandates representations composed of symbols, which stand for atomic or composite entities and enter into arbitrarily nested relationships. We argue that the main desiderata of a representational system — productivity and systema…Read more
  •  16
    Brahe, looking for Kepler
    Behavioral and Brain Sciences 23 (4): 538-540. 2000.
    Arbib, Érdi, and Szentágothai's book should be required reading for any serious student of the brain. The scope and the accessibility of its presentation of the neurobiological data (especially the functional anatomy of select parts of the central nervous system) more than make up for the peculiarities of the theoretical stance it adopts.
  •  50
    We outline an unsupervised language acquisition algorithm and offer some psycholinguistic support for a model based on it. Our approach resembles the Construction Grammar in its general philosophy, and the Tree Adjoining Grammar in its computational characteristics. The model is trained on a corpus of transcribed child-directed speech (CHILDES). The model’s ability to process novel inputs makes it capable of taking various standard tests of English that rely on forced-choice judgment and on magn…Read more
  •  38
    Word recognition is the Petri dish of the cognitive sciences. The processes hypothesized to govern naming, identifying and evaluating words have shaped this field since its origin in the 1970s. Techniques to measure lexical processing are not just the backbone of the typical experimental psychology laboratory, but are now routinely used by cognitive neuroscientists to study brain processing and increasingly by social and clinical psychologists (Eder, Hommel, and De Houwer 2007). Models develope…Read more
  •  42
    Idealized models of receptive fields (RFs) can be used as building blocks for the creation of powerful distributed computation systems. The present report concentrates on investigating the utility of collections of RFs in representing 3D objects under changing viewing conditions. The main requirement in this task is that the pattern of activity of RFs vary as little as possible when the object and the camera move relative to each other. I propose a method for representing objects by RF activi…Read more
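    As a toy illustration of the general idea (not the specific method proposed in the report), the sketch below represents an "object" by the summed activities of a fixed grid of 2D Gaussian RFs over its feature points, and checks that a small image-plane shift, standing in for a change in viewing conditions, leaves the activity vector nearly unchanged. All data and parameters are invented for the example.

    # Toy RF-activity representation of an object (illustration only).
    import numpy as np

    def rf_activities(points, centers, sigma=0.5):
        """Activity of each Gaussian RF, summed over the object's feature points."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=0)

    centers = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
    obj = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.5]])
    shifted = obj + 0.1  # small change in viewing conditions

    a, b = rf_activities(obj, centers), rf_activities(shifted, centers)
    print(np.corrcoef(a, b)[0, 1])  # close to 1: the activity pattern is stable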
  •  42
    A Swan, a Pike, and a Crawfish walk into a bar
    Journal of Experimental and Theoretical AI 20: 261-268. 2008.
    The three commentaries of Van Orden, Spivey and Anderson, and Dietrich (with Markman’s as a backdrop) form a tableau that reminds me of a fable by Ivan Andreevich Krylov (1769 - 1844), in which a swan, a pike, and a crawfish undertake jointly to move a cart laden with goods. What transpires then is not unexpected: the swan strives skyward, the pike pulls toward the river, and the crawfish scrambles backward. The call for papers for the present ecumenically minded special issue of JETAI was designe…Read more
  •  38
    Neural spaces: A general framework for the understanding of cognition?
    Behavioral and Brain Sciences 24 (4): 664-665. 2001.
    A view is put forward, according to which various aspects of the structure of the world as internalized by the brain take the form of “neural spaces,” a concrete counterpart for Shepard's “abstract” ones. Neural spaces may help us understand better both the representational substrate of cognition and the processes that operate on it. [Shepard].
  •  35
    Variation set structure — partial alignment of successive utterances in child-directed speech — has been shown to correlate with progress in the acquisition of syntax by children. The present study demonstrates that arranging a certain proportion of utterances in a training corpus in variation sets facilitates word segmentation and phrase structure learning in miniature artificial languages by adults. Our findings have implications for understanding the mechanisms of L1 acquisition by children, …Read more
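    A minimal sketch of the kind of corpus manipulation described, under invented assumptions: a chosen proportion of utterances from a toy artificial language is arranged into runs of partially overlapping neighbours (crudely, by shared sentence onset), while the remainder is shuffled freely. None of this reflects the actual materials or procedure of the study.

    # Toy corpus builder: a fraction of utterances is placed in variation-set-like
    # runs of partially overlapping sentences; the rest is randomly interleaved.
    import random

    def make_corpus(sentences, variation_proportion=0.5, seed=0):
        rng = random.Random(seed)
        n_var = int(len(sentences) * variation_proportion)
        in_sets, isolated = sentences[:n_var], sentences[n_var:]
        in_sets.sort(key=lambda s: s.split()[0])  # crude alignment by shared onset
        rng.shuffle(isolated)
        return in_sets + isolated

    toy_language = ["gi ro ta", "gi ro na", "bu ke ta", "bu ke na", "da mo ta", "da mo na"]
    print(make_corpus(toy_language, variation_proportion=0.5))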