Published in The Philosophy of Artificial Intelligence, edited by Margaret Boden, in the Oxford Readings in Philosophy Series (Oxford University Press, 1990), pp. 368 - 440.

The Connectionist Construction of Concepts

Adrian Cussins

                                               Adrian@haecceia.com


Abstract

   The character of computational modelling of cognition depends on an underlying theory of representation.  Classical cognitive science has exploited the syntax/semantics theory of representation that derives from logic.  But this has had the consequence that the kind of psychological explanation supported by classical cognitive science is conceptualist: psychological phenomena are modelled in terms of relations that hold between concepts, and between the sensors/effectors and concepts.  This kind of explanation is inappropriate for the Proper Treatment of Connectionism (Smolensky 1988).  Is there an alternative theory of representation that retains the advantages of classical theory, but which does not force psychological explanation into the conceptualist mould?  I outline such an alternative by introducing an experience-based notion of nonconceptual content and by showing how a complex construction out of nonconceptual content can satisfy classical constraints on cognition.  The psychologically fundamental structure of cognition is not the structure that holds between concepts, but, rather, the structure within concepts.  The theory of the representational structure within concepts allows psychological phenomena to be explained as the progressive emergence of objectivity.  This can be modelled computationally by means of the computational processes of a perspective-dependence-reducing transformer.  This device may be thought of as a generalisation of a cognitive map, which includes the processes of map-formation and map use.  It forms computational structures which take nonconceptual contents as inputs and yield nonconceptual contents as outputs, but do so in a way which makes the resulting capacity of the system less and less dependent on any particular perspectives, yielding satisfactory performance from any point of view.

(1) Preface

1.1 Two Cognitive Science Frameworks for a Solution to the Problem of Embodied Cognition

            Cognitive science theories are theories of how physical systems think.  But a framework for cognitive science theorising must explain how it is possible for physical systems to think.  How can intentional phenomena be part of the same world which is described by the natural sciences?  How can there be organisms in the world which are capable of thinking about the world?  How can the world include, as a part of itself, perspectives on the world?  I shall call the problem of possibility introduced by these questions, “the problem of embodied cognition”.

 

            This article is about solutions to the problem of embodied cognition, which are both psychological and computational in character.  The Language of Thought (LOT) framework (Fodor 1976, 1987, and see §3) is exhibited as a candidate solution, and a rival cognitive science framework (“C3” for Connectionist Construction of Concepts) is developed.  Both LOT and C3 serve also as methodologies for work in cognitive science, helping to direct research and to understand its significance.  Hence two contrasts emerge from the paper: a contrast between two general conceptions of the enterprise of cognitive science and a contrast between two ways of understanding how embodied cognition is possible.

 

1.2 A Theory of Representation as a Means for Deriving Psychological Explanations from Computational Models

            A computational artefact which is held to have significance for psychological explanation is a “model”.  A model is just a physical object.  How are psychological explanations to be extracted from it?

 

            A cognitive science theory (“a theory”) is a structured articulation of psychological explanations based on the functioning of the model.  Cognitive science theorising thus rests on a conception of the relation between computational artefacts and psychological explanations.  This relation is mediated by a theory of representation.

 

            A representation is itself a physical object which has two kinds of properties: properties of the representational “vehicle” and properties of the representational “content”.  For example, a sequence of marks on a marking surface may be a representation.  The alphanumeric letter sequencing that these marks instantiate is a property of the representational vehicle.  And if the sequence happens to be the following, “Stanford is warmer than Oxford”, then the content of the representation is that Stanford is warmer than Oxford.  The representational vehicle is the medium that carries the representational content as its message.

 

            In a model, the properties of a representational vehicle are all properties which have computational impact (for example, syntactic properties of LISP code).  They are properties which affect the computational functioning of the model.  And the properties which form the representational content are all properties which have psychological impact (for example, the task domain semantic properties of the LISP code).  They are properties which affect the psychological explanations which can be derived from the model.  So, on one side, the properties of a representation have a role in psychological explanation, and on the other side, they have a role in the computational functioning of the model.  It is the theory of representation which must tie together these two sets of properties, and hence establish the connection between computational functioning and psychological explanation.  It is a theory of representation which allows us to extract psychological import from computational physical objects: it is what gets a theory out of a model.

 

            If cognitive science involves getting a psychological theory out of a computational model, and if a theory of representation is the way to do this, then in order to understand the nature of cognitive science theorising we need to understand the relation between computation, representational vehicles, representational content, and psychological explanation.  The task is inherently multidisciplinary:

 


Figure 1: Four Levels of Analysis in Cognitive Science

 


1.3 Cognitive Science Frameworks

            A cognitive science framework consists of an analysis at each of the four levels in figure 1.  A central theme of this paper is that the analyses are not independent of each other.  For example, given a von Neumann analysis of the computational architecture of a model, and a syntactic analysis of the model's representational vehicles, the theory of content for the model would have to be a semantic theory.  And the choice of semantic contents entails that a particular kind of psychological explanation (conceptualist explanation) is derived from the model (as will be explained later).  Or, if one chooses a connectionist computational architecture, one may be led, as for example Smolensky (1988) has been, to reject syntactic representational vehicles.  As is shown in this article, this consequence should itself have implications for the kinds of contents which connectionist representational vehicles can carry, and thus implications for the kinds of psychological explanation which can be extracted from connectionist models.  The analysis at each level constrains the analysis at the adjacent levels, so consequences can also be traced in a top-down direction.

 

            A cognitive science framework, then, involves a decision at each of these levels, so that the decision at each level is compatible with the decisions at all of the other levels.  Diagramming the possible choices at each level provides a representation of competing cognitive science frameworks.  The terms which denote the choices in figure 2 are explained in the body of the article.  But it may help to begin with the diagram:

Figure 2: LOT and C3 as Cognitive Science Frameworks

[Vertical arrows may be read as “constrains”.  Horizontal arrows indicate some possible kinds of choices at each level.  The left hand side of the diagram is labelled as the set of choices which constitute the Language of Thought framework for cognitive science, and the right hand side is labelled as the set of choices which constitute the C3 framework for cognitive science.  The unfamiliar terms in the diagram are explained in the body of the text.]

 

1.4 The Strategy of the Paper

            An alternative kind of content from that presupposed by LOT is suggested, and the consequences of its use are considered for psychological explanation, for theories of representation, and for computational implementation.

 

            If the alternative framework is to be a genuine alternative to LOT, then it must provide a solution to the problem of embodied cognition; it must indicate how the physical embodiment of cognition is possible.  §(2) explains why this problem is a problem and provides a necessary and sufficient condition for its solution.

 

            In §(3), I explain why the LOT interpretation of cognitive science offers a candidate solution to the problem of embodied cognition.  I point out that this status depends on the computational use of the classical syntax/semantics theory of representation (S/S theory).

 

            The dependence of LOT theorising on the S/S theory of representation entails that psychological modelling based on LOT employs conceptual content.  So in §(4), I explain the distinction between conceptual content and nonconceptual content, give several examples which indicate the psychological need for a notion of nonconceptual content, and introduce a particular kind of nonconceptual content: construction-theoretic content (CTC).

 

            In §(5), I explore the cognitive psychological consequences of modelling in terms of conceptual content.  This establishes the contrast for §(6) to develop the idea that the nonconceptualist psychological task is to explain the cognitive emergence of objectivity.   §(7) makes the notion of objectivity more precise, and provides a way of assessing any system for the degree to which it is a concept-exercising system.  §(8) develops the connection between objectivity and perspective-independence.

 

            §(9) shows how a psycho-computational theory of map-like transformations of nonconceptual content can explain a decrease in the perspective-dependence of those abilities of the system by reference to which the system's contents are specified.  This explains how a cognitive science which models in terms of nonconceptual content can nevertheless satisfy conceptual constraints on cognition.  I suggest that the interesting cognitive employment of connectionism should not rest on the S/S theory, because the S/S theory entails conceptualist theorising, and connectionist cognitive modelling is suited to nonconceptualist psychological modelling.  I give some reason to think that C3 is as suited to (a way of cognitively using) connectionism as LOT is to classical AI.  The potential for connectionism to use nonconceptual content shows why Fodor and Pylyshyn's (1988) criticism of connectionism is misplaced.  Connectionism can use the apparatus I have introduced to show how connectionist cognitive modelling can, in principle, respond to the problem of embodied cognition.

 

(2) The Problem of Embodied Cognition and the Construction Constraint on its Solution

2.1 The Problem of Embodied Cognition

            Consider the following way to bring out the problem of embodied cognition.

           

            Suppose, for the purposes of this article, that there is an irreducible and indispensable scientific level of cognitive explanation of human behaviour and that, even by the end of the next millennium, cognitive science will not have been made redundant by neurophysiology, quantum mechanics or some other noncognitive level of explanation.[1]

 

            Let us also accept naturalism: that all nonphysical properties are either reducible[2] to, or must be realised[3] or implemented[4] in, physical properties.  In other words, anything that has a causal power either has only physical causal powers or must be built out of physical components, so that it is possible, in principle, to understand why it is that something which is built physically like that (pointing to the physical science description) has those causal powers (pointing to the nonphysical description).  Naturalism does not require that nonphysical properties be - despite appearances - really physical properties (naturalism does not require reduction), but it does require that if we knew all the science there could be, we should not find it coincidental that certain physical objects have the irreducible nonphysical properties that they have.

 

            Whether or not human behaviour can be explained physiologically, humans behave as they do because of neurophysiological properties of humans.  But - given our first supposition - humans behave as they do because of certain irreducible cognitive properties.  Neurophysiological explanation and cognitive explanation are independent of one another, and—apart from cognitive or physiological breakdowns—are each complete[5] in their own terms.  How then can it be that both of the following are true: (1) cognitive explanations of behaviour are not causally redundant, and (2) the physical causation of the behaviour of a person marches in step with the cognitive causation of the behaviour of a person, so that a person is not torn apart in a tug of war between the physical and cognitive causal powers in the person?  I write as I do because of my beliefs about how best to communicate a philosophical problem to a readership that is partly nonphilosophical, but it is also true that I behave as I do because of certain neurophysiological causes within me.  How do we avoid the conclusion that there is a battle for the control of my hand?

 

            This problem is not resolved by supposing that cognitive explanation is non-causal, for the problem will then re-emerge as: how is it possible that the behaviour of a person, which is physically caused, is coherent from a cognitive perspective?  My writing as I am is cognitively predictable (whether or not it is cognitively caused), but had my neurophysiology been different in any of a very large number of imaginable ways, I would not be writing at all; because, for example, my hand would be motionless, or stuck behind my back.  How can my physiology keep on making my body do one of the limited range of things that it must do if it is to make cognitive sense?[6]

 

            In short, how can we understand cognition naturalistically without either the reduction of cognitive properties to noncognitive properties[7], or the elimination of cognitive properties[8], or the rejection of the scientific indispensability of cognitive properties?[9]  To understand this is to understand how cognition can be physically embodied, and thus to understand how to solve the problem of embodied cognition.

 

2.2 The Construction Constraint

            There is a naturalistic alternative to reduction, elimination and explanatory dispensability: the construction of cognitive properties out of noncognitive properties.  This idea may be introduced by an example.

 

            The notion of architectural functionality may be essential to the work of an architect, even though the notion cannot be reduced to any builder's notion of the arrangement of materials.  For example, an architect may need to work with the notion of an efficient corporate headquarters.  But this notion cannot be defined in terms of the spatial arrangement of commercial sizes of bricks, stone, metal, glass, plastic, wood and concrete.  An unspecifiable infinite set of different arrangements of builders' materials will be sufficient for an efficient corporate headquarters, given a particular company at a particular stage of development and a particular technological and ethnographic context.  Not only does an unspecifiable infinite set make reduction impossible, but which unspecifiable set this is will vary with the contextual parameters.  What is efficient for a small company may not be efficient for IBM in the eighties.  And what is efficient given telephones, electronic mail and fax machines would not be efficient in a technological context which predated these communication media.

 

            So, the notions in terms of which a client would specify a building to an architect cannot be reduced to notions that a builder must work with.  There are, thus, two distinct levels of notions (levels of description) which an architect must somehow bridge if he is to do his job.  For a layman, the architect's ability to produce a builder's specification from an architectural specification, or to know which architectural properties would be instantiated by a building built to a builder's specification, may appear unintelligible.  There is no further level of description that the architect employs.  Rather, in learning his job, an architect has gained an understanding of the architectural notions and the builders' notions that allows him to move back and forth between descriptions at each of these levels.  For the architect the relation between the two levels of description is Intelligible[10], not coincidental, whereas for the layman the relation is not Intelligible and so may appear coincidental.

 

            The architect's understanding may be more practical than theoretical.  Finding Intelligible the gap between the architectural level of description and the materials level of description may consist simply in the following skill: given any building specification, an architect should be able to tell[11] (and to know that he can tell) what architectural properties a building constructed like that would have, and given any architectural specification (eg. to provide functional office accommodation which is appropriate to the context of St. Paul's Cathedral), an architect should know how to put together building materials so as to satisfy the specification.  For the skilled architect, the gap between the two levels of description is Intelligible, not coincidental: we may say that the architect — but not the layman — has the ability to construct architectural notions from building notions.

 

            Thus the relation of construction is an explanatory relation between levels which differs from the relation of reduction and from the relations of elimination and dispensability.  The construction constraint, applied generally, claims that any non-physical level of description and explanation should be constructable out of, ultimately, some physical level, in an analogous sense to the architect's construction of architectural notions out of building and materials notions.  If we have to construct some notion φ out of physical notions then we need to be able to understand the nature of an object's being φ in terms of a sequence of levels of description which are such that the top level makes manifest the φ-ness of the object (as, for example, the architectural level makes manifest the corporate efficiency of a building), the bottom level is a physical level of description, and every two adjacent levels are such that the gap between them is Intelligible (not coincidental), as is the gap — for the architect — between the architectural and builders' levels.[12]

 

            Relying on this intuitive idea of the distinction between Intelligible gaps between levels and gaps which are coincidental or miraculous[13], the construction constraint can be stated more formally as follows:

 

A theory (or, rather, a framework of theories) of an ability φ meets the construction constraint with respect to φ if, and only if, it explains what it is for an organism to possess the ability in terms of the possession of a sequence of more than two levels of abilities, L1 - Ln, such that:

(i) the base level, L1, abilities are such that we do not understand why it is that an organism which has these abilities is thereby an organism which has the ability φ (that is, L1 and Ln are so related as to generate a miraculous coincidence problem about how they march in step), and,

(ii) the theory shows why it is that possession of the top level, Ln, abilities constitutes or manifests possession of φ, and,

(iii) for each pair of levels, Li and Li+1 (1≤i<n), Li and Li+1 are not related in such a way that they generate a miraculous coincidence problem about how they march in step (that is, the gap between Li and Li+1 is Intelligible).[14]
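Put schematically (this restatement is mine, compressing clauses (i)-(iii); read I(Li, Lj) as “the gap between levels Li and Lj is Intelligible”):

```latex
% Schematic restatement of the construction constraint (not in the original text).
% I(L_i, L_j): the gap between levels L_i and L_j is Intelligible.
% "L_n \vdash \varphi": possession of the L_n abilities constitutes possession of \varphi.
\mathrm{Constructs}(L_1,\dots,L_n;\,\varphi) \iff
    \neg I(L_1, L_n)                                                     % (i)
    \;\wedge\; (L_n \vdash \varphi)                                      % (ii)
    \;\wedge\; \forall i\,\big(1 \le i < n \rightarrow I(L_i, L_{i+1})\big)  % (iii)
```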

 

            The gap between the folk-psychological level and the neurophysiological level is not Intelligible in this way: I exploited this gap to make vivid the problem of embodied cognition as a tug-of-war for the control of my hand.  Given only our folk-psychological and neurophysiological knowledge, the marching in step of these two levels appears to be a miraculous coincidence.[15]  In contrast, because we know how to build machine languages out of electronic components and a high-level language like LISP out of a machine language, the fact that the behaviour of the computer is determined by two levels of description (LISP and electronics), each complete in its own terms, does not appear as a coincidence.  The construction relation between LISP and electronics does not require that there be law-like relations between any two adjacent levels: there are no known laws connecting electronic and computational descriptions.  It requires just that somebody with knowledge of, for example, machine languages, LISP and electronics, should know roughly how to go about putting together electronics so as to build a LISP machine.  Not that one has to succeed every time; just that one shouldn't find it coincidental that what has been built functions as a computer.

    

            There is an interpretation of cognitive science according to which it has been exploring a construction[16] of conceptual abilities in terms of interposing representational and computational levels between the folk-psychological and neurophysiological levels.  In the next section I show how the LOT theory can be seen as offering an account along these lines.  The rest of the paper develops an alternative account which is more appropriate to the use of connectionist computational architectures and is not subject to the same kind of difficulties as LOT.

 

(3) The Language of Thought as Concept Construction

            Remarkably, there has only been one serious attempt to solve the problem of embodied cognition by satisfying the construction constraint on the possession of concepts.[17]  This is the model which Fodor has called the Language of Thought (LOT), and which he and others have defended at a philosophical level for over a decade.  The possibility of LOT turns crucially on the possibility of both the computational and the psychological application of a theory of representation which has been developed in logic since Frege.  This is the theory which characterises a representational system in terms of its combinatorial syntax and combinatorial semantics.

 

            The syntactic theory of a representational system provides a recursive specification of all and only the legal concatenations of the atomic representations of the system.  The semantic theory of a representational system provides an axiomatisable, recursive specification of the interpretation of all the legal representations.  If the representational system is to do any work it must be possible to define over the syntax either a proof theory for logical work, or a theory of procedural consequence for computational work, which specifies all of the legal transformations from each legal representation.  Much of the value of the logical tradition has rested on our being able to define purely syntactically a theory of legal transformation which nevertheless respects semantic constraints.

 

            The syntactic and semantic theories must be explanatorily independent, yet linked.  They must be independent in that it must be possible to understand how to apply the theory of legal transformation without understanding anything of the semantic theory; but they must be linked in that the syntactic application of the theory of legal transformation must not violate semantic constraints: traditionally, it must not transform a set of true premises into a false conclusion.  And, the syntactic and semantic theories must be so related that even if the representational system is not complete, a useful proportion of all of the semantically coherent transformations must be capturable by the syntactic application of the theory of legal transformation.
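To make the shape of the proposal concrete, here is a minimal toy sketch (mine, not part of the original text; the formula encoding and function names are invented): a recursive syntax, a recursive semantics, and a theory of legal transformation (here just modus ponens) defined purely over syntactic shape, which nevertheless never carries true premises to a false conclusion.

```python
# Toy syntax/semantics (S/S) system: an illustrative sketch, not code from
# the article.  The transformation rule is defined without reference to
# meaning, yet provably respects the semantic constraint of truth preservation.

from itertools import product

# --- Syntax: a recursive specification of the legal representations ---
# A formula is an atom ('p', 'q', ...) or an implication ('->', f1, f2).

def is_formula(x):
    if isinstance(x, str):
        return True                       # atomic representation
    return (isinstance(x, tuple) and len(x) == 3 and x[0] == '->'
            and is_formula(x[1]) and is_formula(x[2]))

# --- Semantics: a recursive specification of interpretation ---

def evaluate(formula, valuation):
    if isinstance(formula, str):
        return valuation[formula]
    _, antecedent, consequent = formula
    return (not evaluate(antecedent, valuation)) or evaluate(consequent, valuation)

# --- Theory of legal transformation, defined purely over syntax ---
# Modus ponens inspects only the *shape* of premises, never their meaning.

def modus_ponens(premises):
    derived = set(premises)
    for p in premises:
        for q in premises:
            if isinstance(q, tuple) and q[0] == '->' and q[1] == p:
                derived.add(q[2])
    return derived

# The link between the two theories: under every valuation which makes the
# premises true, everything syntactically derivable is also true.

premises = {'p', ('->', 'p', 'q')}
assert all(is_formula(f) for f in premises)
for values in product([True, False], repeat=2):
    v = dict(zip(['p', 'q'], values))
    if all(evaluate(f, v) for f in premises):
        assert all(evaluate(f, v) for f in modus_ponens(premises))
```

The point of the sketch is that modus_ponens can be applied by something which knows nothing of evaluate, yet the final check never fails: the two theories are explanatorily independent but linked.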

 

            What is so remarkable about LOT is its insight that the way to achieve the required Intelligible connection between the computational component and the psychological component of modelling in cognitive science is by developing a syntactic and semantic representational system for which the syntax is implemented computationally and the semantics[18] is appropriate for psychological explanation.  If this can be done, then the way to achieve the required relation between the computation and the psychology will simply follow from our understanding how to establish representational systems for which it is possible to specify a syntactic theory of legal transformation which respects semantic constraints.  What the S/S theory delivers is a semantics-independent level (syntax) that marches in step with the semantic level.  So if the syntax can be implemented computationally and the semantics can provide the basis for psychological explanation, then LOT will have shown how computational transitions can march in step with psychological transitions.

 

            The syntactic characterisation of classical computational architectures is natural in part because these architectures themselves grew out of the logical tradition.  And the psychological employment of semantics is natural because of a tradition that also goes back to Frege: the tradition of taking the meaning of a sentence to be the object of the propositional attitude which is expressed by the concatenation of a psychological verb (“believes”, “desires”, ...), “that” and the sentence itself.

 

            Cognitive science, as interpreted by LOT, becomes distinct from both the theory of computation and the theory of psychology, because it is the attempt to establish empirically an intermediate level of explanation built on a system of cognitive representation which is such that (1) the syntax is computationally implementable, (2) the semantics captures the important psychological generalisations, and (3) the theory of procedural consequence is consistent and usefully complete.  It is of course true that work in cognitive science will proceed without LOT, that many cognitive scientists disagree with LOT, and that numerous objections have been raised to LOT.  But LOT remains the only theory which gives clear criteria for the evaluation of the success of cognitive science, which is plausibly workable, and which shows how cognitive science may achieve the significance it hopes for: to explain how cognition can be physically embodied by constructing the psychological component of a cognitive science model out of the computational component.

 

            My purpose here is not to criticise LOT, but to understand how to develop an alternative theory which has a comparable explanatory scope.  LOT is so impressive because it rests on the remarkable tradition of S/S representational theory.  To develop an alternative solution to the problem of embodied cognition, we need to develop an alternative theory of representation.  An alternative theory of computation is not sufficient.

 

(4) Content, Conceptual Content and Nonconceptual Content

            I have suggested how it is, in the case of LOT, that providing the kind of explanation appropriate to the problem of embodied cognition depends on the theory of representation which LOT employs.  But it is also true that the choice of a representational theory determines the kind of psychological explanation which a model can offer.  This is so because the representational theory determines the kind of content which can be assigned to states of the model, and this, in turn, determines the kind of psychological explanation that the model can make available.  The link between representation and psychological explanation is content.

 

4.1 Introducing Content

            I will begin with a pocket account of the notions of content, conceptual content and nonconceptual content, before presenting a more careful analysis of them.

 

            Human persons act as they do, and thus often behave as they do, because some aspect of the world is presented to them in some manner.  The term “content”, as I shall use it, refers, in the first instance, to the way in which some aspect of the world is presented to a subject; the way in which an object or property or state of affairs is given in, or presented to, experience or thought.  For example, I see the grey, plastic rectangular object in front of me as being a typing board, having the familiar Qwerty structure.  I also see it as being in front of me, and these facts are responsible for my hands moving in a certain way.  Representational states of mine have content in virtue of which they make the world accessible to me, guide my action, and (usually) are presented to me as something which is either correct or incorrect.  I shall speak of a representational state (or vehicle) having content.  It may be that a single representational vehicle carries more than one content, even more than one kind of content.

 

            The theory of content - in terms of which we explain what content is - locates the notion with respect to our notions of experience, thought and the world.  But it is important to see that this is consistent with the notion of content being applied to (though not explained in terms of) states which are not states of an experiencing subject.[19]  There are derivative uses of the notion in application to the communicative products of cognition, such as speech, writing, and other sign-systems, or to non-conscious states of persons such as sub-personal information processing states, but these uses must ultimately be explained in terms of a theory of the primary application of content in cognitive experience.[20]

 

            Conceptual content is content which presents the world to a subject as the objective, human world about which one can form true or false judgements.  If there are other kinds of content, kinds of nonconceptual content, then that will be because there are ways in which the world can be presented to a subject of experience which do not make the objective, human world accessible to the subject.  It is not unnatural to suppose that there must be nonconceptual forms of content, because this is the kind of thing that we want to say about very young human infants (before the acquisition of the object concept, say), or very senile people, or certain other animals.  It is compelling to think of these beings as having experience, yet they are unable to communicate thoughts to us; we are unable to understand — from the inside — how they are responding to the world; we are unable to impose our world on them.

 

            Conceptual content presents the world to a subject as divided up into objects, properties and situations: the components of truth conditions.  For example, my complex conceptual content (thought) that the old city wall is shrouded in mist today presents the world to me as being such that the state of affairs of the old city wall being shrouded in mist obtains today.  To understand this content I have to think of the world as consisting of the object, the old city wall, the property of being shrouded in mist, and the former satisfying the latter.  The possession of any content will involve carving up the world in one way or another.  There will be a notion of nonconceptual content if experience provides a way of carving up the world which is not a way of carving it up into objects, properties or situations (ie. the components of truth conditions).[21]

 

            It is natural to say that the possession of content consists in having a conception of the world as being such and such.  But the word “conception” is too closely related to “concept” for it to function neutrally as between conceptual and nonconceptual presentations of the world.  I shall say[22] that a content registers the world as being some way, and so ask: is there a way of registering the world which does not register it into objects, properties or situations?

 

4.2 Definitions of Conceptual and Nonconceptual Properties 

             I will begin a more careful analysis of these notions by introducing definitions of conceptual and nonconceptual properties, and then show how these definitions can be applied within the theory of content.[23]

 

A property is a conceptual property if, and only if, it is canonically characterised[24], relative to a theory, only by means of concepts which are such that an organism must have those concepts in order to satisfy the property.

A property is a nonconceptual property if, and only if, it is canonically characterised, relative to a theory, by means of concepts which are such that an organism need not have those concepts in order to satisfy the property.

 

Notice that the difference between these two definitions lies principally in the difference between “must have” in the first definition and “need not have” in the second definition.
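The quantifier structure of the two definitions can be made explicit as follows (a schematic restatement of mine, writing Canon_T(P) for the set of concepts used in P's canonical characterisation relative to theory T, and Has(o, c) for “organism o possesses concept c”):

```latex
% Schematic restatement (not in the original text) of the two definitions.
% The modal operators carry the force of "must have" / "need not have".
\mathrm{Conceptual}_T(P) \iff
    \forall c \in \mathrm{Canon}_T(P)\; \Box\,\forall o\,\big(P(o) \rightarrow \mathrm{Has}(o,c)\big)

\mathrm{Nonconceptual}_T(P) \iff
    \exists c \in \mathrm{Canon}_T(P)\; \Diamond\,\exists o\,\big(P(o) \wedge \neg\mathrm{Has}(o,c)\big)
```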

 

            Consider the property of thinking of someone as a bachelor.  A specification of what this property is will use the concepts *male*[25], *adult* and *unmarried*.  But nothing could satisfy the property unless it possessed[26] these concepts, since nothing would count as thinking of someone as a bachelor, unless he or she was able to think of the person as being male, adult and unmarried.  So the property of thinking of someone as a bachelor (unlike the property of being a bachelor) is a conceptual property.     

 

            Or consider the belief property of believing that the Stanford Campus is near here (where I think of the Stanford Campus as the Stanford Campus, rather than as the campus of the richest university in the West, and I think of here as here, rather than as 3333 Coyote Hill Road).  Given this, nothing could satisfy the property unless it possessed the concept of the Stanford Campus qua Stanford Campus.  Thus the property is canonically characterised only by means of concepts which an organism must have in order to satisfy the property, and is therefore a conceptual property.  Contrast the property of having an active hypothalamus.  Such a property is characterised by means of the concept *hypothalamus*, but an organism may satisfy the property without possessing this concept.  Therefore the property of having an active hypothalamus is a nonconceptual property.[27]

 

            Formally, the idea is that conceptual content is content which consists of conceptual properties, while nonconceptual content is content which consists of nonconceptual properties.  Can we give any substance to this formal idea?

 


4.3 The Application of the Definitions of Conceptual and Nonconceptual Properties within the Theory of Content

            In order to show that there is a notion of nonconceptual content we need to show that the definition of nonconceptual properties can be applied within the theory of content.  What does this mean?

 

            The definitions of conceptual and nonconceptual properties use the notion of canonical specification, for otherwise every property would be a nonconceptual property, since, trivially, every property — including conceptual properties — can be specified by means of concepts that the subject need not possess.  So we need to employ the notion of canonical specification.  If we are to apply these definitions within the theory of content then the notion of canonicality that we are interested in is the notion of being a canonical specification within the theory of content.  Certain specifications of a state or an activity are identified within a theory of content as being canonical when they are specifications generated by the theory in order to capture the distinctive way in which some aspect of the world is given to the subject of the state or activity.  So, as brought out by McDowell (1977), “'aphla' refers to aphla” would be canonical, but “'aphla' refers to ateb” would not be, even though both would be true, because aphla is ateb.  The notion of being canonical within the theory of content is parallel to the notion of being canonical in the theory of number, where the canonical specification of the number nine is not “the number of planets”, but “nine”.

 

4.31 The Case of Conceptual Content

4.311 The Notion of a Task Domain

            In order to understand how conceptual content works we need the notion of a task domain for a behaviour.  A task domain is a bounded domain of the world which is taken as already registered into a given organisation of a set of objects, properties or situations,[28] which contains no privileged point or points of view, and with respect to which the behaviour is to be evaluated.[29]   

 

            SHRDLU[30]'s blocks micro-world was SHRDLU's task domain.  The notion of a Model in formal semantics, and (often) the notion of a possible world in logic, are notions of task domains.  Likewise, the performance of a chess computer is evaluated with respect to a chess task domain which consists of 64 squares categorised into two types, 32 pieces — each with an ownership property —, a legal starting position, three types of legal ending position, and a set of transformations from each legal position to all of the legal continuations from that position.  The computer's task domain excludes, for example, human emotions and plans, lighting conditions, reasons for, and the point of, winning ...  What this means is that the performance of a chess-playing computer is evaluated with respect to transformations of chess tokens on a 64-square board, but not with respect to its response to human emotions, the lighting conditions, the historical pattern of the game, or “its reasons for winning”.  Moreover, because the domain is fixed so that certain situations are registered as wins for White, and certain others as wins for Black, the performance of the computer is not assessed with respect to its ability to transfer its knowledge to a different game, chess*, which is identical to chess except that those situations which are wins for White in chess are wins for Black in chess*, and those situations which are wins for Black in chess are wins for White in chess*.[31]
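The role of the fixed registration can be put in computational dress with a small sketch (mine, not the article's; the predicate and position names are invented): a task domain here is nothing more than a fixed context of evaluation, and chess* is the same state space with the win-registration swapped, so competence defined relative to chess does not carry over.

```python
# Illustrative sketch: a task domain as a fixed context of evaluation.
# chess* swaps the win-registration of chess over the same positions, so a
# performance registered as success in chess is registered as failure in chess*.

def make_domain(win_for_white):
    """A (toy) task domain: an evaluation of terminal positions."""
    def evaluate(position, player):
        if win_for_white(position):
            return 'win' if player == 'White' else 'lose'
        return 'lose' if player == 'White' else 'win'
    return evaluate

# Stand-in predicate over terminal positions (purely illustrative).
def checkmated_black(position):
    return position == 'black-is-checkmated'

chess      = make_domain(checkmated_black)
chess_star = make_domain(lambda p: not checkmated_black(p))  # wins swapped

position = 'black-is-checkmated'
assert chess(position, 'White') == 'win'        # registered as success in chess
assert chess_star(position, 'White') == 'lose'  # and as failure in chess*
```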

 

            A task domain, then, is a conceptualised region of the world which provides the context of evaluation (true / false, win / lose, true-in-a-model / false-in-a-model, adaptive / non-adaptive, successful / unsuccessful ...) for the performance of some system.  How is the notion of a task domain connected to the notion of conceptual content?

 

4.312 The Specification of α Content by Concepts of the Task Domain

            Consider again the cognitive occurrence in me that we express in words as, “I am thinking that the Stanford Campus is near here”.  This is a representational state of mine, and may possess more than one kind of content.[32]  What kind of content does the state carry?  There is a type of content (let us call it “α content”) which is stipulated within the theory of content[33] to be a kind of content that has determinate[34] truth conditions[35]; that is, whose evaluation as correct imposes a determinate condition on the world.  It follows that the linguistic expression, “that the Stanford Campus is near here” cannot fully capture the α content of the representational state, since this requires a fixed interpretation for “near” and “here”.  (In order for the state to be a state with α content, we need to know what truth condition it imposes on the world.  But the words “here” and “near” do not tell us.)

 

            Now suppose that this state occurs as part of a project of mine in which I am planning how best to eat lunch given various parameters and constraints on me: time, money, hunger, distance to eating locations, speed of transport available to me, cost of food at various locations.  These parameters and constraints establish a task domain which fixes an interpretation for the terms “near” and “here”: suppose that it follows from the time constraints on me, and my hunger, that I need to be eating within fifteen minutes.  Then “near” means: can be reached by a mode of transport available to me within fifteen minutes.  Likewise “here” will mean something like: the region between the spot on which I am standing and a line joining the embarkation points for all the modes of transport which are part of my planning domain.
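A minimal sketch of how such parameters fix a determinate truth condition (the fifteen-minute deadline is the one supposed in the text; everything else here is invented for illustration):

```python
# How the lunch-planning task domain fixes the interpretation of "near".
# An illustrative sketch, not part of the original text.

DEADLINE_MIN = 15   # fixed by the time constraints and hunger supposed above

def near(travel_time_min):
    """'near' as fixed by this task domain: reachable before the deadline."""
    return travel_time_min <= DEADLINE_MIN

# "The Stanford Campus is near here" now imposes a determinate condition:
print(near(travel_time_min=10))   # True: the content evaluates as correct
print(near(travel_time_min=40))   # False: the content evaluates as incorrect
```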

 

            The interpretation of my cognitive occurrence as having α content depends on specifying the content by means of concepts of a task domain; in this case, the domain of my planning to eat lunch under various constraints and given various parameters.  In other words, the provision of determinate truth conditions for my cognitive state, required by the interpretation of it as having α content, entails that the content is canonically specified by means of concepts which reflect the objective structure of the task domain: its organisation into objects, properties and situations.  Since an organism can only grasp an α content if it grasps its truth conditions (or its contribution to the truth conditions of contents containing it), it follows that an organism which grasps such a content must know what the (relevant part of the) task domain (“t-domain”) of the content is.  But a t-domain (unlike the world) is essentially conceptually structured, so there is no way of knowing what the t-domain of a content is without possessing the concepts in terms of which the t-domain is structured.  Hence possession of an α content requires possession of the concepts in terms of which it is canonically specified.  It follows that α content is a kind of content which consists of conceptual properties, as defined above.  That is, α content is conceptual content.

 

            The process of identification of α content as conceptual content may be mimicked in order to demonstrate a notion of nonconceptual content.  We must ask: is there a way to motivate in a similar fashion the application of the definition of nonconceptual properties within the theory of content?  In asking this, I am asking whether nonconceptual specifications of states or activities can ever be canonical within the theory of content.  Thus I am asking whether nonconceptual specifications of an activity can ever be required by a correct theory of content in order to capture the distinctive way in which some aspect of the world is given to the subject of the activity.

 

            We can clarify what is involved in doing this by setting out, as a summary of the above discussion, the different elements that I have used in motivating the definition of conceptual properties within the theory of content:

(1) The definition of conceptual properties (by stipulation);

(2) The claims that there is a constraint within the theory of content which requires determinate truth conditions[36], and that possession of content which satisfies this constraint requires knowledge (grasp) of its truth conditions (these claims are given by the theory of content, and are constitutive of this notion of content);

(3) A psychological state expressed linguistically as “thinking[37] that the Stanford Campus is near here”, not yet analysed with respect to the kind of content that it has;

(4) The claim, argued in the text, that the interpretation of (3) under (2) requires the notion of a task domain and the specification of the content of (3) by means of concepts of the task domain.

(5) A demonstration that (4) results in the satisfaction of (1), hence the identification of content which satisfies the constraint in (2) as conceptual content.

 

The notion of a task domain provides the link between the philosophical notion of α content and my stipulative definition of conceptual properties; a link which is needed to show that the analysis of a psychological state in terms of α content entails satisfaction of the definition of conceptual properties.

 

4.32 The Case of Nonconceptual Content

            I can show the need for nonconceptual content by showing that there are psychological states the full understanding of which requires a notion of content which cannot be analysed in this way; that is, which must be canonically specified by means of concepts that the subject need not have.  The discussion will have to parallel the discussion for the case of conceptual content, so we need a parallel for (1) - (5):

(1') The definition of nonconceptual properties (by stipulation);

(2') Some constitutive conditions on a kind of content, β content, which are provided by the theory of content, but which are different from the conditions in (2).

(3') Some psychological or representational state as yet unanalysed with respect to the kind of content it has.

(4') An argument for the claim that the interpretation of (3') under (2') requires the notion of some domain other than the task domain and the specification of the content of (3') by means of concepts of this domain.

(5') A demonstration that (4') results in satisfaction of (1'), hence the identification of β content as nonconceptual content.

 

            We already have (1').  What about (2')?

 

4.321 Cognitive Significance

            A good theory of content is answerable to various constraints.  For example, a good theory of content should be appropriate for use within a content-based scientific psychology, it should have resources to explain how certain contents have determinate truth conditions, and it should also capture cognitive significance, that is, the role that content plays with respect to perception, judgement and action.

 

            How can the theory of content accommodate cognitive significance?  Frege's notion of sense[38] was introduced, in the first instance, to explain how certain identity statements could be informative.  For example, to learn that Hesperus = Hesperus is not to learn anything new, but to learn that Hesperus = Phosphorus may be to learn something of considerable significance, yet Hesperus is Phosphorus.  It follows that possession of the content expressed here by the word “Phosphorus” cannot consist just in the ability to think of the planet Venus (specified no further than this), because just the same ability is associated with “Hesperus”.  There is here a motivation for introducing a notion of content (sense) which differs from a purely referential notion of content (reference).  There is a content expressed by “Hesperus” which is different from the content expressed by “Phosphorus” because the former content plays a different role from the latter content in a person's judgements of the truth value of contents of the form “... = Hesperus”.  Frege generalised this motivation into a criterion of identity for such contents (senses).[39]  We may generalise it still further to yield a generalised notion of sense which I call “β content”, whose identity conditions are fixed, not just by its constitutive connections to judgement, but by its constitutive connections to perception, action and judgement.[40]  Possession of a particular β content requires possession of a contentful state which plays that role in the psychological economy of the subject which is constitutive of the β content.

 

            A major success within recent work in the theory of content has been to show that there are indexical and demonstrative β contents that cannot be canonically specified, in the way appropriate to conceptual content, by means of any description.[41]  This has been achieved by showing that were a description — per impossibile — to provide canonical specification of the content, in the way appropriate to conceptual content, it would alter the cognitive significance of the content, that is, the character of its constitutive connections to action and judgement.  Since cognitive significance is constitutive of β content, it follows that this form of specification cannot canonically capture β contents.

 

            For example, Perry (1979) shows this for the indexical “I” and connections to action, and Peacocke (1986) shows it for demonstrative perceptual contents and connections to perception and judgement.  Perry's point is that the conceptual use of any descriptive canonical specification — *the x such that φx* — for the indexical content *I*, will alter the cognitive significance of the thought *I am ψ* by altering its constitutive connections to action.  The reason for this is that it is always possible that one may not realise that I am the x such that φx, so that even if one would act immediately on the basis of judging *I am ψ* (eg. *I am spilling sugar all over the supermarket floor*), one might not act on the basis of judging *the x such that φx is ψ*.

 

            Peacocke contrasts what a person knows when he or she knows the length of a wall in virtue of just having read an estate agent's handout, and what a person knows when he or she knows the length of a wall just in virtue of looking at it.  Frege's intuitive criterion of difference[42] for contents can be used to show that although both people know the length of the wall, neither knows what the other knows.  Thus suppose that my wife's and my cognitive states were identical except for the fact that I know what the length of the wall is just in virtue of having read the handout, and she knows what the length of the wall is just in virtue of having seen it.  But then, thinking of the length of the wall in only that way which is available to each of us, I may be agnostic about the thought *that length is greater than the length of our piano* (because, for example, we don't know how long in feet our piano is), whereas my wife will judge this thought to be true because, simply by looking, she can see that our piano will fit against the wall.  Therefore, the perceptual demonstrative β content differs from any descriptively specified conceptual content, and so cannot be canonically specified, in the way appropriate to conceptual contents, by means of any specification such as, “the person sees that distance-in-feet(a,b) = n” where a and b are the end-points of the wall.
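For reference, the criterion of difference being applied here can be put schematically (my formulation of the standard statement: contents p and q are distinct if it is possible for a single rational subject, at a single time, coherently to take different attitudes towards them):

```latex
% Frege's intuitive criterion of difference, schematically (restatement mine).
\Diamond\,\exists s\,\exists t\,
    \big(\mathrm{Assents}(s,t,p) \wedge \neg\,\mathrm{Assents}(s,t,q)\big)
    \;\rightarrow\; p \neq q
```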

 

            We could treat examples such as Perry's and Peacocke's in a way which was similar to my treatment of thinking that the Stanford Campus is near here — that is, conceptually — by means of concepts of the respective task domains.  That would be, in effect, to characterise these indexical contents in a descriptive, conceptual fashion.[43]  But Perry's and Peacocke's arguments show that justice cannot be done, in such a way, to the cognitive significance of these contents.  So we have only to recognise a notion of content for which cognitive significance is essential, to see that there is a kind of content which cannot be canonically specified by means of concepts of the task domain.

 

            The argument so far shows that there is a very large class of cognitive states (all states which contain indexical or demonstrative elements[44]) which have a kind of content (β content) for which the only canonical conceptual specification is the use of a simple demonstrative or indexical under the conditions of a shared perceptual environment or shared memory experience.  Such a specification is evidently useless for the construction-theoretic purposes of a scientific psychology, since the only way for the theorist to understand the nature of the content is either to share the experiential environment of the content, or to draw on similar experiential environments available to the theorist in memory experience.[45]  (Scientific psychology, here, is psychology which is aiming to solve the problem of embodied cognition, and which therefore is aiming to construct any explanatorily indispensable notion of content out of non-content-involving levels of description[46]).  Yet this class of contents is particularly important for psychology, at least because of its direct connections to action and its crucial role in learning.  Is the theoretical psychologist therefore incapable of capturing those contents which are basic to our ability to act in the world and to learn from it?

 

            Only if the psychologist assumes that he or she must work with conceptual content.  The problem arises because there is no conceptual structure within the demonstrative or the indexical or the observational content which can be exploited to yield a canonical conceptual specification of the content which would be appropriate for the purposes of a scientific psychology.  But this doesn't exclude there being any nonconceptual structure within the content.  If we can make sense of this notion, then there is here an argument to show that much of the psychological life can only be captured by means of, and should, therefore, only be modelled in terms of, nonconceptual content.

 

4.322 The Notion of a Substrate Domain

            Abandon, then, the demand that every content must have its theoretical specification given in the way which is constitutive of conceptual content; that is, by means of concepts of the task domain.  What other theoretically adequate method of specification could there be?  I introduce below one kind of canonical nonconceptual specification. It is not necessarily the only kind[47], although I believe that it is the only kind in terms of which we can solve the problem of embodied cognition.

 

            It will help to consider the operation of an autonomous, mobile robot known as “Flakey” which lives at a research institute, SRI, in California[48].  Flakey navigates the corridors of SRI.  His task is to move up and down the corridors, avoiding hitting the walls, and to turn into particular doorways.

 

            In order to be able to behave flexibly in a range of task domains a system must be able to employ representations[49] of features which are special to the domain in which it happens to find itself.  For example, if the width of corridors varies in Flakey's environment, then Flakey will need to respond differentially to corridor width.  Given the kind of system that Flakey is, this will mean that Flakey will have to represent this variable.  The only features a system need not represent are those which do not change throughout the career of the system.  So the greater the system's representational capacity, the greater its potential flexibility.  Should we suppose, therefore, that the cognitively ideal system would computationally represent — in the traditional AI style — all the facts there are?  That although nothing achieves this ideal, the closer one comes to it, the better one's cognitive capacities will be?

 

            To suppose this[50] is to miss an important distinction between two kinds of fact.  What I want to show is that computational representation of only one of these kinds of fact is required for the ideal Artificial Intelligence system.  Flakey is sometimes imagined to deliver pizza throughout SRI.  It might be that only one weight of pizza is allowed through the extensive security system, and that Flakey could therefore be built on the assumption that if something is recognised as a pizza, then the mobile arm needs to exert a certain force to lift it.  This would have the effect of “unburdening[51]” the representational capacities of Flakey, with respect to having to work out, each time it was about to lift a pizza, how much force was required to lift it.  This connection could simply be built into the hardware.  However, the folks down at Hewlett Packard, intrigued by Flakey's growing reputation, might want to try him out on delivering pizza for them.  They would be sorely disappointed because, unfortunately for Flakey, the security system at HP labs lets all weights of pizza through.  Flakey was discovered to be throwing pizza around in a way not likely to impress DARPA[52].

 

            Indeed, DARPA could reasonably argue that this was a cognitive defect of Flakey's.  We treat intelligence in an open-ended way:  So-and-so may be great at chess, but if he can't learn to play Go, then we think him the less intelligent for it.  For Flakey, representation of pizza weight is required for acceptable, let alone ideal, cognition.

 

            But we shouldn't conclude therefore that to be truly intelligent Flakey must represent all the facts there are.  For example, it would be surprising if Flakey were to represent the distance between the sonar sensors at its base.  This is not only for the reason that this distance is a constant throughout Flakey's career, but, more importantly, because Flakey's own structure is not part of Flakey's task domain.  Flakey never has to manipulate the distance between his sonar sensors; this distance is not something with respect to which Flakey's performance will be evaluated.  Rather, it is part of Flakey's substrate of abilities in virtue of which Flakey has those corridor-movement behavioural capacities which he in fact has.  This distinction between task-domain (“t-domain") and the domain of the system's substrate of abilities (“s-domain") is essential to understanding what a flexible system is required to represent.  To be able to operate flexibly in a range of t-domains a system must be able to represent those features of a t-domain which vary, or may vary, within the range of t-domains.  But so long as the s-domain is outside this range, as it usually will be, a flexible system has no need to represent aspects of its s-domain.[53]
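            The moral can be put in computational terms.  The following toy sketch (in Python; every name in it is hypothetical, and it bears no relation to Flakey's actual software) contrasts the hard-wired, SRI-only design with the flexible design that represents pizza weight as a t-domain variable, while the distance between the sonar sensors remains a mere s-domain constant of the control code:

```python
GRAVITY = 9.81            # m/s^2
SONAR_SEPARATION = 0.4    # metres; an s-domain constant: part of the robot's
                          # substrate, used by the control code but never
                          # represented as a feature of any task domain

class HardwiredLifter:
    """The SRI-only design: only one weight of pizza gets through security,
    so the lifting force is built into the hardware, not represented."""
    FIXED_FORCE = GRAVITY * 1.5   # assumes every pizza weighs 1.5 kg

    def lift_force(self):
        return self.FIXED_FORCE   # fails at HP labs, where weight varies

class FlexibleLifter:
    """The flexible design: pizza weight varies across the range of
    t-domains, so it is explicitly represented and updated by perception."""
    def __init__(self):
        self.represented_weight_kg = None   # a t-domain representation

    def perceive_weight(self, sensed_kg):
        self.represented_weight_kg = sensed_kg

    def lift_force(self):
        return GRAVITY * self.represented_weight_kg
```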

 

            My visual capacity may be quite superb, and open-ended: I can visually discriminate any kind of object, in an extensive range of conditions of illumination and distances from me, and so forth.  But nobody would suggest that it is a defect of my visual capacity that I am ignorant of the algorithms employed by my visual information processing system.  With respect to my personal level visual capacity, my sub-personal information processing capacities are part of the s-domain.[54]  Given a division between t-domain and s-domain in a particular case, performance in the task domain — even fully conceptual performance — does not require the possession of any concepts of the s-domain.

 

4.323 Specifying b Contents by Concepts of the Substrate Domain

            As we saw, the notion of a t-domain provided the link between a content and the definition of conceptual properties.  Can the notion of the s-domain provide a parallel link between b content and the definition of nonconceptual properties?  An intelligent agent does not need to have concepts of its s-domain, so if b content can be canonically specified by reference to the objects and properties of the s-domain, we will have motivated a kind of content which is specified by means of concepts that the system or organism need not have.

 

            Consider the following quotation from Evans (1982, chapter 6):

 

What is involved in a subject's hearing a sound as coming from such and such a position in space? ...  When we hear a sound as coming from a certain direction, we do not have to think or calculate which way to turn our heads (say) in order to look for the source of the sound.  If we did have to do so, then it ought to be possible for two people to hear a sound as coming from the same direction and yet to be disposed to do quite different things in reacting to the sound, because of differences in their calculations.  Since this does not appear to make sense, we must say that having spatially significant perceptual information consists at least partly in being disposed to do various things.

 

When Evans asks, “what is involved in a subject's hearing a sound as coming from such and such a position in space?", he is asking about the nature of the content by which the subject is presented in experience with this aspect of the world.  Evidently the content is indexical or demonstrative since, were we to express the content in words, we would say that perception presents the sound as coming from “that location", or “from over there".  The conclusion drawn on the basis of Perry's and Peacocke's examples applies: there is no way to canonically specify this content as a conceptual content, if we wish to do theoretical justice to the cognitive significance of the content; in particular its direct connection to action.  What Evans adds is, first, a further reason why this kind of content cannot be captured conceptually (no conceptual content can be necessarily linked to action as directly as certain b contents require), and, secondly, the suggestion that the way to capture the cognitive significance of the content is by reference to a way of moving in the world; the subject's ability to reach out and locate the object, or walk to the source of the sound, which the perceptual experience makes available.  At the place in the argument which we have now reached, it is this second idea which is important, because, for Evans's content, a way of moving in the world is part of the s-domain.

 

            Given our usual views about consciousness, the idea here can seem quite strange: it is the idea that certain contents consist in a means of finding one's way in the world (tracking the object, say) being available to the subject in his or her experience, even though it may not be available to the subject conceptually, and, indeed, the subject may be incapable of expressing in words what this way of moving is.[55]  My knowledge of where the sound is coming from consists in, say, knowledge of how I would locate the place; knowledge which is exhausted by what is available to me directly — without depending on any concepts — in experience.  I may have that knowledge even though I am unable to entertain any thoughts about the way of moving in question; I require no concepts of my ability to find my way in the environment, in order to have an experience whose content consists in presenting to me a way of moving.

 

            It may help to consider one of the most extreme cases of nonconceptual content (to which I will return in §(7)): the case of pain experience.  We have been taught in the philosophical tradition not to view pain experience as experience with any content at all; its function isn't to represent the world, we are told.  But the reason for this is not that pain experience isn't phenomenologically very similar to the experience of colour or shape of objects, but, rather, that we do not view the world as possessing various paining properties.  We say that the edge of my desk is coloured brown on the basis of a visual experience as of brownness, but we do not say that the edge of my desk has a sharp paining on the basis of a tactile experience of a sharp pain.  I will give a reason in §(7) as to why this is the case, but for now the point is to think of experience as a spectrum of kinds of experience ranging from pain experience, where we are not remotely inclined to attribute the experienced property to the world, through colour experience, where we do attribute the experienced property to the world (but we get into some trouble for doing so — §(6)), to shape or motion experience.  Pain experience is just much less objective[56] than shape experience.  This will show up in the kinds of content that pain experience can have, as against the kinds of content that shape experience can have.  Pain experience never has conceptual content, but it doesn't follow that it has no content at all.  Pain experience presents the world as being painful; paining is made available to one in pain experience.  But we don't suppose that we need concepts of pain for this to be the case; we just have to be in pain, or to remember being in pain.  In a similar way, experience can present a way of moving in the world, even though the subject of the experience has no concepts of ways of moving.

 

            Our kinaesthetic sense provides another example.  On the basis of kinaesthetic experience the subject knows how his body is arranged; how his hands are in relation to each other and to his head, for example.  But the person need have no concepts of this spatial arrangement in order to have this knowledge.  Rather, the knowledge consists in an experiential sensitivity to, for example, moving one's hands closer together, or bringing one's hands next to one's torso.  The capacities one has to rearrange one's body are directly present in kinaesthetic experience without having to possess any concepts of the arrangement of a body.

 

            Returning to the example from auditory experience, Evans's idea is that the spatial content of the auditory perception has to be specified in terms of a set of conceptually unmediated abilities to form judgements and to move in the egocentric space[57] around the organism. This is because the content consists in the experiential availability to the subject of a dispositional ability to move.  The experiential content of perception is specified in terms of certain fundamental skills which the organism possesses, “the ability to keep track of an object in a visual array, or to follow an instrument in a complex and evolving pattern of sound".  These are skills which belong to the subject's s-domain.  So, if Evans is right, this class of contents is canonically specified by reference to abilities which are part of the s-domain, and therefore by means of concepts which a subject need not have in order to grasp any member of this class of contents.  So the structure of the (conceptually atomic) indexical, demonstrative and observational contents of experience is the structure of their nonconceptual content.  b content is nonconceptual content.

 

            People often misunderstand this as a behaviouristic theory, so let me emphasise again that the claim is not, in the first instance, about the characterisation of a sub-personal[58] perceptual state of the organism.  The aim is to capture how the person's perceptual experience presents the world as being (ie. a genuine notion of personal level content).  The notion of nonconceptual content is a notion which must ultimately be explained in terms of what is available in experience.  If the content is canonically characterised as a complex disposition of some specified sort, then the claim is that this disposition is directly available to the person in his or her experience, and that the content of the experience consists in this availability.  But for a behaviourist, the notion of experience can have no explanatory role[59].

 

            In summary, then, I have discerned a constraint on content in terms of cognitive significance, rather than in terms of truth conditions; I have suggested in a Fregean spirit that we need to introduce a kind of content which is answerable to this constraint; I have shown that this kind of content cannot be canonically specified in any way which is appropriate to conceptual content, and that it is therefore not a type of conceptual content; we have seen that we need to employ this kind of content to do full justice to any cognitive psychological state with indexical or demonstrative elements (most of our cognitive life); and that a plausible suggestion for how to canonically capture the content is by means of concepts of the s-domain.  Since a cognitive creature does not need to have concepts of its s-domain, this kind of content satisfies the definition of nonconceptual properties, and is therefore a kind of nonconceptual content.[60]  I will call the kind of nonconceptual content which I have introduced “construction-theoretic content (CTC)", because I shall go on to show how this kind of content can form the basis for a construction of conceptual capacities.

 

4.4 How Widespread is the Phenomenon of Nonconceptual Content?

            The contents of certain conceptual states have only the structure of their nonconceptual content, and so can only be psychologically analysed in terms of that nonconceptual structure.  There are two levels of analysis of content, conceptual and nonconceptual, and it has been demonstrated that the psychological explanation of a certain portion of our cognitive life can only be given in terms of its nonconceptual structure.  It is irresistible to wonder: how widespread is this phenomenon?  Could it be that even for those areas of cognition where there is conceptual structure, the correct level of scientific psychological analysis is still in terms of its nonconceptual structure?  Is the psychological structure of cognition its nonconceptual structure?  I believe that the hypothesis that it is forms the basis for a connectionist alternative to LOT.  But this is to run ahead of ourselves.[61]

 

            It will help to consider some other examples.   Evans quotes Charles Taylor as follows:

 

Our perceptual field has an orientational structure, a foreground and a background, an up and down...  This orientational structure marks our field as essentially that of an embodied agent.  It's not just that the field's perspective centres on where I am bodily — this by itself doesn't show that I am essentially agent.  But take the up-down directionality of the field.  What is it based on? Up and down are not simply related to my body — up is not just where my head is and down where my feet are.  For I can be lying down, or bending over, or upside down; and in all these cases 'up' in my field is not the direction of my head.  Nor are up and down defined by certain paradigm objects in the field, such as the earth or sky: the earth can slope for instance... Rather, up and down are related to how one would move and act in the field.

 

Taylor is here asking what the significance of our concept *up* consists in.  He considers three answers, two of which are: up is where my head is, and, up is where the sky is.  But the significance of our notion of up cannot consist in our grasp of the direction of our head or the direction of the sky, because, for example, we can perfectly correctly employ the concept *up* when we are lying down.  And so on.  Then Taylor offers a third answer, “up and down are related to how one would move and act in the field".  This immediately strikes one as a very different sort of answer from the first two answers that Taylor considers.  In the first two cases what is being offered is a traditional conceptual analysis; a definition, as we might define “bachelor" to refer to an unmarried adult male.  Where it is proper to give a traditional conceptual analysis, a person's understanding of the left hand side of the definition must consist in the cognitive availability of the conceptual structure which is displayed on the right hand side.  But Taylor's third answer is not a definition; it simply states that our possession of the concept *up* must be analysed in terms of certain basic, nonconceptual abilities that we possess, such as our ability to move and act in a coordinated way.  These basic abilities may be characterised by means of technical concepts (such as concepts of the way in which the gravitational force structures our field) which an organism need not possess in order to possess these basic abilities.  Taylor has hit upon the analysis of a concept in terms of its nonconceptual content.

 

            Or consider recognitional abilities in those cases (the majority) where recognition does not depend on the recognition of the object as *the x such that φx and ψx* (for any concepts of properties *φ* and *ψ*).  For example, my ability to recognise my wife's face as Charis's face is not an ability (even a sub-personal ability) to recognise the unique face with certain conceptual features (eg. Roman nose, distance between eyes being n inches ...).  When I think a perceptual-demonstrative thought of the form *That is Charis*, my cognitive state is not correctly reconstructable as involving the inference: that is the φ person, the φ person is Charis, so that is Charis.  In fact, my ability to recall a person's features (even a person whom I know very well) when not in their presence is extremely limited, but this in no way diminishes my ability to hold a particular person in memory.  (In an extreme case, I might not be able to recall a single perceptual feature of my wife, and yet be unrivalled in my ability to think singular thoughts about her).  So it cannot be that the capacity for me to hold someone in memory in the way required for me to have a singular thought about the person consists in my storing some set of conceptual features which, as it so happens, are uniquely satisfied in the world[62].

 

            What this suggests is that although there will be mental features in our theory of recognition, they won't be features whose analysis depends on a semantic account, ie. a semantic relation between the feature and some objective element of the appropriate ready-registered task domain.  Our ability to recognise massively outstrips our ability to recall, and cannot be analysed in terms of it.[63]  The suggested alternative is that our ability to recall objective features of the world is dependent on the structure of the nonconceptual content of our recognitional capacities; content specified in terms of basic spatial and temporal tracking and discriminatory skills which are required to find our way around the environment.

 

4.5 Nonconceptual Content and Representational Vehicles

            Equipped with the distinction between conceptual and nonconceptual content, we can return to the general argument that the kind of representational theory that a computational model employs determines the kind of content appropriate for that model, which in turn determines the kind of psychological explanation that the model can provide.  In the next section, I consider the kind of psychological explanation that requires conceptual content, and in §(6) I consider the kind of psychological explanation that nonconceptual content can make available.  But it will help first to move down the psycho-computational hierarchy one step to see the connection between the two kinds of content and two kinds of representational theory.

 

            § 3 explained that LOT's capacity to respond to the problem of embodied cognition depended on its use of the S/S theory of representation.  We are now in a position to add a feature to those features discerned in § (3) which are constitutive of S/S theory.  I have already pointed out that the level of semantics and the level of syntax are explanatorily independent of each other, in the sense that one does not have to know the semantic theory in order to understand what the theory of the syntax is saying, and vice-versa.  Syntax must respect semantic constraints, but the operations of proof theory, or procedural consequence, defined over syntax, are formal in that they are independent of semantic features.  All that I have said so far about semantics is that we often want a semantic theory to be a finitely axiomatisable recursive theory of semantic properties.   We can now see that a semantic theory is a theory of the relation between syntactic items and conceptual contents[64].  A semantic theory has the form that it has because the base axioms specify relations of reference or denotation to objects, properties, or situations of the task domain.  S/S theory is committed to conceptual content.
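            To illustrate the form such a theory takes, here is a schematic Tarski-style fragment, invented purely for this example, with Flakey's corridor world as the task domain.  The base axioms relate primitive syntactic items to t-domain elements; the recursive clauses determine the semantic properties of complexes:

```latex
\begin{align*}
&\textbf{Base axioms (reference into the t-domain):}\\
&\qquad \mathrm{ref}(\mathsf{a}) = \text{Flakey}\\
&\qquad \mathrm{sat}(\mathsf{F}, x) \leftrightarrow x \text{ is in a corridor}\\[4pt]
&\textbf{Recursive clauses (semantics of complexes):}\\
&\qquad \mathsf{Fa} \text{ is true} \leftrightarrow \mathrm{sat}(\mathsf{F}, \mathrm{ref}(\mathsf{a}))\\
&\qquad (\varphi \wedge \psi) \text{ is true} \leftrightarrow \varphi \text{ is true and } \psi \text{ is true}
\end{align*}
```

The point to notice is that the right-hand sides deploy concepts of the task domain, and only of the task domain: that is what makes a theory of this form a theory of conceptual content.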

 

            What form would a representational theory for nonconceptual content take?  Notice, first, that the relation between vehicle and content will not be given by a semantic theory, since the contents are nonconceptual.  The contents carried by the vehicles cannot be given by referential or satisfaction (ie semantic) relations between the vehicles and the elements of the task domain.   Notice, secondly, that the relation between the representational vehicles of CTC, and CTC itself cannot be a semantic relation, because the level of CTC is not explanatorily independent of the level of the s-domain abilities of an organism in virtue of which the organism is able to find its way in its environment.  These s-domain abilities are the vehicles which carry CTC[65], but we saw that we could not understand how experience nonconceptually presents the world, without specifying such contents in terms of these very abilities.  The “syntax" and the “semantics" of nonconceptual content are not explanatorily independent, so they are not, strictly speaking, syntax and semantics.  The notion of nonconceptual content shows how there can be a radical alternative to S/S representational theory.  The value of this point is that when we come to consider Fodor and Pylyshyn's (1988) criticism of connectionism to the effect that connectionist causation is not syntactically systematic, we can agree with Fodor and Pylyshyn, but show how this fact redounds to the advantage of connectionism. 

           

(5) Conceptualist Theorising and Psychological Explanation

            The computational use of S/S theory entails the use of conceptual content, which in turn entails conceptualist theorising.  Conceptualist theorising is theorising about cognition, at the level of psychological explanation (see figure 2), in terms of conceptual properties.  The application of conceptual properties, as defined in §(4), to theorising about an organism's cognitive functioning requires that the organism be assumed to possess at least that set of basic concepts on which the analytic hierarchy of concepts[66] is to be founded.  This set of presupposed basic concepts will be taken to be either innate or acquired by non-psychological (eg. neuro-physiological) processes, or both.

 

            Logic has given us a way of understanding the atemporal logical relations which obtain between constituent concepts and between complex thoughts.  Work on knowledge representation in cognitive science[67] has thrown up numerous formalisms for extending this to capture the dynamic temporal relations that constitute actual theoretical and practical reasoning.  All of psychological activity will then be taken to consist in various manipulations of the relations between the basic concepts (central processes of reasoning and learning), or in establishing the connection between the central concept manipulating system and the sensory and effector systems of the organism (a connection achieved by the peripheral, perception and action modules; cf Fodor (1983)).

 

            By presupposing the possession by the organism of a set of basic conceptual contents, the theorist is presupposing the availability to the organism of a ready registered world (a task domain), consisting of that set of objects, properties and situations which are taken in the semantics to be the referents of the basic concepts and their complexes.  The connection between this system of basic concepts and the world is held to be achieved by peripheral modules which, apart from some parameter setting, are essentially innate.  Thus the cognitive starting point, according to conceptualist theorising, is already a point at which an objective (if, as yet, simple) world is available to the subject of cognition: Objectivity is presupposed in order that cognitive theorising can begin.  If there is a problem about how physical organisms can acquire capacities of registration which register the world objectively (unlike capacities based on pain-experience[68], say), it is taken to be not a psychological problem, but rather a problem for neuro-physiology or the theory of natural selection.

 

            By contrast, because the possession of CTC does not depend on the possession of the concepts by means of which the CTC is canonically specified, nonconceptualist theorising need not presuppose the organism's possession of basic concepts, and so need not presuppose the availability to the organism of a ready registered world.  Rather, nonconceptualist theorising (as we shall see) is theorising about those processes that give rise to the availability to the organism of an objective[69], registered world.  The problem of how basic concepts are acquired is not therefore cast aside for other disciplines to deal with, but is treated directly as the central problem for psychology.  As I explain in §(6), nonconceptualist theorising takes cognition to be the emergence of objectivity, not the inferential manipulation of a ready-made registration of objectivity.  One result of this is that the perception and action systems are not peripheral modules whose function is to establish a connection between surface impingements and a central reasoning system, but are the core of cognition.  High-level reasoning forms merely a peripheral structure built around the core.

 

            Another consequence of conceptualist theorising is that beneath the explanatory level of the basic concepts is nothing psychological; just an implementation[70] theory.  This is a theory of how the conceptual activities are instantiated within nonconceptual processes, eg. neuro-physiological processes.  The conceptual theory within cognitive psychology may be realised within a theory of computational procedural consequence as, within logic, the semantics (entailment relations) is realised within syntax (proof-theory).  The point is that, for conceptualist theorising, what lies beneath the basic concepts is explanatorily independent of the conceptual level, in the sense that one can understand fully what is happening at the conceptual level without, necessarily, understanding anything about what is happening beneath this level[71].  The slogan of conceptualist theorising is that what is beneath the concepts is beneath explanatory bedrock.

 

            By restricting attention to the conceptual structure of cognition, the conceptualist is forced to model all cognitive processes as processes of inference, either demonstrative or non-demonstrative (even learning is modelled as hypothesis formation, and hypothesis verification or falsification).  Conceptual structure is, indeed, precisely that structure which is required to model all of the logical inferences which are required for some analysis.  So psychology is held to be explanatorily dependent on logic.  What a concept is is fixed by its location in an inferential network, and its psychologically extrinsic connections to the world.  By contrast, nonconceptualist theorising exploits the nonconceptual structure within atomic concepts.  Once the shape of individual concepts has been correctly modelled, the possibilities of inferential combination will follow (and be explicable) as a tessellation of these elements.  (In the argument which follows, I shall focus on the systematicity (§ 7) of contents.  If we can show that it is possible to capture systematicity at the level of nonconceptual content, then we will have shown that it is possible to capture inferential connections at the level of nonconceptual content.[72]  What it would be to get right the shape of the elements so that the correct combinatorial tessellations follow is considered in §(7).  How one could achieve this is considered in §(8)).

 

            It is not correct that modelling the combinatorial structure of cognition entails modelling cognition with a representational system with a syntax and a semantics (Fodor and Pylyshyn 1988).  For, if nonconceptualist ideas are correct, it is possible to capture combinatorial structure by means of modelling the non-syntactic / non-semantic representational structure within concepts.  Instead of the nature of concepts being held to be explanatorily dependent on the nature of the inferential connections between concepts, the nature of the inferential connections will be explained as a consequence of the nonconceptual nature of the constituent concepts.

           

            In summary, the following elements are intrinsic to conceptualist theorising:

1) The presupposition of basic concepts;

2) The connection between concepts and the world is a “peripheral" connection which is largely innate;

3) The presupposition of objectivity: the cognitive starting point assumes a task domain which is available to the organism;

4) The problem of the acquisition of a capacity to register the world as the objective world is not a psychological problem, but a problem for neuro-physiology or the theory of natural selection;

5) The location of psychologically explanatory bedrock: The lowest level of psychological theorising is the manipulation of basic concepts.  Conceptual properties are merely implemented in nonconceptual properties;

6) Representational structure is intended, in the first instance, to be the structure of the inferential connections between concepts.  The nature of a concept is explained in terms of the nature of its inferential connections, rather than vice-versa.  The explanatory centrality of inference, rather than learning.[73]

 

              Each of these elements is a consequence of restricting psychological explanation exclusively to conceptual content and is, therefore, a consequence of the computational and psychological application of the S/S theory of representation, and hence a result of LOT.  Suppose we want psychology to explain how physical organisms come to acquire basic concepts; that we want a psychological explanation for the possession by organisms of the elements of thought (that we want — for whatever reasons — psychological explanation that does not have the consequences (1) - (6)).  Then we need an alternative to LOT.  Given our assumptions that psychological explanation must be based on some notion of content, and that an alternative to LOT must still be a computational theory, then we require a theory based on the psychological and computational use of nonconceptual content.[74]  But what could such psychological explanation be like?  How could there be an alternative to conceptualist psychological explanation? 

 

            Armed with a notion of nonconceptual content (CTC), how can we do justice to the conceptual characteristics of cognition?  If a cognitive life could be characterised only at the level of nonconceptual content, it would involve merely a very primitive registration of the world.  But even a primitive registration of the world is a primitive registration of the world only if it is possible to exhibit it as a simple form of, or constituent in, a sophisticated, or fully conceptual, registration of the world.  How can we do that?

 

(6) Nonconceptualist Psychological Explanation: The Emergence of Objectivity

            Conceptualist theorising presupposed a ready-registered world — a task domain — for its semantic theory.  Because CTC is specified in terms of the properties of the s-domain rather than the t-domain, psychological theorising based on CTC does not have to presuppose a ready-registered, fully objective, world available to the subject.  All that is presupposed are the basic, nonconceptual, organismal capacities of the s-domain.  Nonconceptual content, in isolation from background conceptual capacities, presents the world (of course), but not yet objectively: not yet, that is, as a world which is available from any perspective, and about which one may be mistaken.[75]

 

6.1 Is the Realm of Reference Explanatorily Prior to the Cognitive Realm?

            Where the content of experience is exclusively conceptual content, some element of the objective world is given to the subject as (and only as) an element of the objective world (and thus as a task domain element).  For this reason, it is an adequate specification of an atomic conceptual content simply to use whatever linguistic item (name or predicate) stands in a primitive semantic relation to the objective element of the world which is the referent of the content.  (And molecular contents can be specified as logical constructs out of atomic contents).  Thus, if I perceive a traffic light as a traffic light (and not as, say, an elongated, colour-coded candy), then an adequate specification of the content of this aspect of my experience will be the use of the phrase “traffic light".  This is a referential specification of content.  It picks out traffic lights as task domain elements.[76]  If the content of experience was exclusively conceptual then it would be possible to take any aspect of the content of experience, assume that the aspect presented an objective element of the world as an objective (ie. task domain) element, and then name that element as the referent of the content.

 

            But it is not possible.  Consider a particular colour experience, divorced from a possible theoretical background[77].  Were we to attempt to specify the content of this experience referentially as an experience of a particular colour shade we would take our experience to be governed by the following two principles[78] (amongst others): Where a subject's colour experience of a surface A is not discriminably different from his or her colour experience of a surface B, and the subject's perceptual faculties are functioning normally and correctly, the colour of surface A is identical to the colour of surface B.  And, where a subject's colour experience of a surface A is discriminably different from his or her colour experience of a surface B, and the subject's perceptual faculties are functioning normally and correctly, the colour of surface A is different from the colour of surface B.  But since, notoriously, basic colour experience is a dimension along which non-discriminable difference is non-transitive, our attempt would quickly lead to contradiction, since we would obtain the result that the colour of A is identical to the colour of B, the colour of B is identical to the colour of C, but the colour of A differs from the colour of C.  Therefore, the colour of A is identical to, and is not identical to, the colour of B.  So basic colour experience cannot be specified referentially in the familiar conceptualist fashion.  There are no precise colour shades by means of reference to which we could specify the content of basic, observational, colour experience.[79]
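            The structure of the argument can be set out compactly (a reconstruction of the reasoning just given, writing ~ for non-discriminable difference of colour experience under normal and correct functioning, and c(X) for the colour of surface X):

```latex
\begin{align*}
&\text{Principles:}\quad A \sim B \;\Rightarrow\; c(A) = c(B); \qquad A \not\sim B \;\Rightarrow\; c(A) \neq c(B).\\
&\text{Non-transitivity data:}\quad A \sim B, \qquad B \sim C, \qquad A \not\sim C.\\
&\text{Hence:}\quad c(A) = c(B), \quad c(B) = c(C), \quad c(A) \neq c(C),\\
&\text{so that}\quad c(A) = c(C) \;\text{and}\; c(A) \neq c(C): \text{contradiction.}
\end{align*}
```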

 

            We have to be a little careful here, since there is (more or less) a concept *red* — we think of the world as objectively being coloured red — which can be a content of basic colour experience.  So we can specify a colour experience as being as of a red surface.  The question for the conceptualist, though, is whether we can presuppose an account of what it is for a surface to be coloured red, in order to characterise the content of basic colour experience in the familiar conceptualist fashion by means of a semantic relation of reference to the presumed item, a red shade.  But what the argument above shows is that there are no red shades that can play this role; insofar as there are red shades, they can't be task domain elements.  Hence the explanation of what it is for a surface to be red cannot be prior to the explanation of what it is to experience a surface as red.  (Thus, the theory of the content of our colour experience cannot presuppose a world of objective colour properties; it cannot presuppose its own task domain.  To specify a content by the concept *red* is to specify it in a conceptual, but not a conceptualist, fashion.  It will then be for the nonconceptualist to give the proper account of concept ascription). 

 

            Trying to get clear about the conclusion of this argument makes apparent an implicit consequence of conceptualist psychological theorising: by presupposing the psychological possession of basic concepts, the conceptualist is also presupposing a psychologically independent account of what it is for there to exist in the world an objective referent as a suitable denotation for a basic concept.  Perhaps it is for physics to tell us, but if so, psychology can play no explanatory role in this.  By presupposing the possession of basic concepts, the conceptualist is assuming that all that can be said, from a psychological perspective, about what is essential to basic concepts (apart from their interaction with other concepts) is that their content consists in a semantic relation to an objective item of the world.  So for a conceptualist, nothing can be said, by psychology, about what it is for the referents of basic concepts to exist.  For psychological explanation, we presume the world, and try to explain the nature of mind in terms of it; that is, there is an explanatory priority of world to mind.

 

            But it is just this explanatory priority which the argument from colour experience casts doubt on.  (And it is anyway highly dubious: do we really imagine that physics can tell us what chairs, or soccer matches, or university degrees, or crumpled shirts are, independently of a psychological account of what it is for organisms like us to recognise something as a chair, or ... ?).

 

            It would be equally implausible to adopt (with Berkeley) the reverse priority.  The more satisfying alternative is to suppose that the explanation of cognition and the explanation of the world are inter-dependent.  If we combine this idea with the idea that we should use a notion of nonconceptual content to provide a construction of our conceptual capacities, the result is a glimpse of nonconceptualist psychological explanation: the story of the capacity to think is the story of the nonconceptual emergence of objectivity.  For the nonconceptualist, the elements of psychological explanation do not depend on the applicability of a mind/world distinction to the explananda “subjects" or organisms.  These nonconceptual elements are then employed in an account of how such a distinction comes to be applicable, and thus in an account of what it is for the explananda organisms to become subjects of experience and thought.

 

6.2 The  Emergence of a Mind/World Distinction

            Let me say a very little about what this means.  Our ordinary talk about cognition embeds a mind/world distinction; we characterise our cognitive states as being about a mind-independent world, and so we characterise cognition in terms of an external relation between two independently existing entities: mind and world.  I don't just mean that the world continues to exist even though nothing is perceiving it, or that there are more truths about the world than anyone knows, or that there are truths which we are incapable of recognising, but that there is the kind of gap between mind and world that allows minds to be wrong about the world, to refer to the world (rather than merely being immersed — like a paramoecium — in the world), and therefore to be able to think thoughts about the world.

 

            A manifestation of this is that we draw a sharp distinction between those predicates that can be appropriately combined with subject terms which refer to the world, and those predicates that can be appropriately combined with subject terms which refer to the mind.  Thus we can say of a ball, but not of an experience, that it is round.  We can say of a memory, but not of a football pitch, that it is veridical.  And where a single predicate can be used in both contexts, we insist that, strictly speaking, it has a different meaning in each context, or is literal in one context and metaphorical in the other.  Both memories and football pitches can be shrouded in mist, but ...

 

            One of the very few exceptions to this is the predicate “... is objective".  Both an experience and a shape property can be objective.  How can this predicate have a special status?  Suppose that the mind/world distinction is a phylogenetic or ontogenetic achievement: paramoecia don't manifest it, but we do; very young human infants don't manifest it, but adult humans do.  Natural selection has evolved creatures, by means of gradual and continuous genetic changes, which are capable of a mind/world distinction; and the processes of learning within the infant develop, through gradual and continuous neuro-physiological changes, cognitive capacities which present an independent world to an independent mind.  The question then arises, can we describe, explain and make sense of the pre-objective stages of development before a mind/world distinction is applicable, by means of a theory of processes defined over nonconceptual content?  Can we explain the transition from such a pre-objective stage where no concepts are possessed to an objective concept-exercising stage, by means of a theory of the computational manipulations of nonconceptual content?  Nonconceptualist psychological explanation is the attempt to do just this.  It is, therefore, the attempt to show how the mind/world distinction — objectivity — can emerge from a pre-objective stage at which there is merely an undifferentiated mind/world continuum.  The mind is embedded (Cussins (May 1987)) rather than solipsistic or formal (Fodor (1981)), in that a theory counts as a theory of the possession of a concept if, and only if, it is also a theory of what it is for the referent of the concept to exist.

 

            All of our descriptions and explanations are, of course, from our conceptual perspective, so we can describe the undifferentiated mind/world continuum either from the perspective of the mind, or from the perspective of the world.  Adopt, for a moment, the former perspective.  Then the emergence of objectivity is the transition from mere experience to experience of the world[80].  Take the latter perspective.  Then the emergence of objectivity is the transition from atoms swimming in the void, to ironing crumpled shirts and designing collaborative document processing office environments.

 

6.3 Claim-Setting[81]: Objections from Supervenience and Superstructure

            One might suppose that learning — the emergence of objectivity — is just a ladder to be kicked away once climbed.  In other words, the conceptualist might grant that we need a nonconceptualist explanation of phylogenetic and ontogenetic development, but insist that once developed, adult human cognition is adequately treated in the conceptualist fashion.  But this would be to get my argument back to front.  I have not argued that we need to give a nonconceptualist account of learning and therefore we need to give a nonconceptualist account of adult cognition.  Rather, I have discerned a psychological necessity to recognise nonconceptual content within adult, human cognition, and then seen that the notion so introduced is appropriate for explanation of learning.

 

            There are two kinds of significance that the notion of learning can have, according to whether the notion is embedded in a conceptualist or nonconceptualist theory.  If the former, then learning is just a ladder to be kicked away, because what is reached by climbing the ladder is something whose nature is explained by a theory (the theory of inference) which is independent of the theory of learning.  But if the latter, then learning is essential to (mature) cognition because the nature of what is learned is not explanatorily independent of the mechanisms of learning.  For the nonconceptualist, learning is the emergence of objectivity and cognition (without learning) is the maintenance of objectivity.  The explanatory notions in terms of which the maintenance of objectivity is to be explained are the same as those employed in the theory of learning.  It would be illegitimate to assume the significance that the conceptualist attaches to the notion of learning, and then argue that C3 must be mistaken because it supposes that learning is essential to cognition.

 

            My claim is that we are — as adult humans — still awash in a partially differentiated, partially objective, mind/world continuum, in which pain experience lies at one end, various sorts of emotional experience a little further in, then colour experience, then, perhaps, shape experience and the experience of democratic justice.  Our concept possession is, at all moments of our cognitive lives, a matter of staying afloat (shape experience) or only half drowning (the paradoxes of colour experience), or being almost completely submerged (pain experience).  The exercise of a concept is the result, both literally and metaphorically, of our ability to find our way in the environment; to stay afloat.  Our nonconceptual capacities sustain our conceptual capacity; they are not merely part of a transition phase.

 

            The conceptualist's objection might take two forms.  First, he might suppose that there is a counter-example to C3 in the form of an organism which, by cosmic coincidence, comes into existence de novo as a fully formed adult human, perhaps molecule for molecule type-identical with a human who has come into existence in the ordinary way.  Wouldn't C3 be committed to denying that such a creature possessed concepts, and wouldn't such a denial be incompatible with supervenience?[82]  But, in asserting the centrality of learning, C3 does not hold that a person who came into existence de novo would not be a concept-possessor (as the objector supposed); only that the explanation of the conceptual capacities which, we agree, he would possess, would be by means of a theory of learning.  My claim is a claim about the direction of explanation, not about the ontology of supervenience.  So too is the conceptualist's, who could grant the theoretical possibility of a concept-possessing system which had never engaged in inference, while also holding that the explanation of the possession of concepts would be by means of a theory of inference.  So a nonconceptualist can claim that the explanation of what it is to possess concepts is to be given in terms of a theory of learning, while granting the possibility of a concept-possessing system that has never engaged in learning.  Though in both cases, as soon as the concepts come to be exercised, the proprietary processes (inference in the former case, learning in the latter) will also be exercised.

 

            A second form that the conceptualist's objection might take has to do with cognitive superstructure, such as those cognitive capacities which involve language.  The objector is willing to grant that linguistic capacities are grounded in nonconceptual capacities, but nevertheless insists that, once grounded, they build up on their own, yielding the higher aspects of cognition for which LOT is appropriate[83].  In the spirit of claim-setting, I note here the form of the C3 response.

 

            We must be careful to keep in mind the distinction between vehicle and content.  Linguistic and linguistically-infected cognition is crucial to the scientific psychology of human cognition, but we should not argue from the central role of linguistic vehicles in human cognition to a LOT-style theory which supposes that conceptual content, traditionally associated with language by the S/S theory, is basic to the scientific psychology of human cognition.  Indeed, I have argued that linguistic vehicles often express nonconceptual content.  The nonconceptualist, recognising the important role of linguistic vehicles, maintains that it is the nonconceptual content of these vehicles, not their conceptual content, which is psychologically potent.  Nonconceptual content doesn't so much form the foundation of cognition, as provide the building blocks out of which the cognitive superstructure is constructed.  What is of concern is whether S/S theory is required for scientific psychology to respond to the problem of embodied cognition.  The conceptualist's claim that it is required is not only the claim that language is the psychologically basic form of representational vehicle, but also the claim that S/S theory is the right theory of language.  A proper recognition of the role of language in cognition entails neither of these claims.

 

            Words do often express concepts, and that they do so is of great significance for our cognitive life.  This is not denied by C3, which aims to show, first, how the S/S theory of this phenomenon is unworkable because it cannot capture, in a theoretically adequate way, the cognitive significance of indexical and demonstrative contents and, therefore, is ill-placed to yield adequate theories of learning, perception, memory and action (all of which are essentially indexical); and, secondly, and ultimately more importantly, that there is a way of treating the conceptual phenomena of language which does not rest on S/S theory, but which is naturalistically acceptable.  We need to identify the importance of concepts (§7) and then show how a computational theory of nonconceptual content can capture this importance (§8&9).[84]

 

            There are colour concepts, but because of the threat of paradox and problems of cognitive significance, basic colour cognition cannot be explained in the conceptualist fashion.  The nonconceptualist supposes that all of our cognitive life is like basic colour experience, only a bit better or a bit worse; we can usually treat it conceptually but never conceptualistically.  Conceptual content is a valuable idealisation, rather than the basis of psychological explanation.  Obviously, it is more of an idealisation with respect to colour cognition than with respect to number cognition, but the broad base of our cognition is not too dissimilar from the case of colour: The argument from the non-transitivity of non-discriminable difference of colour-of-surface experiences can be extended to cover any vague concept for which the Sorites paradox obtains: *bald*, *heap*, many shape concepts (eg. where I have a concept of a 25-sided figure, but not as a 25-sided figure), the concept of the letter “A" (which cannot be specified geometrically or topologically), the concept of the phoneme [p] (which cannot be specified acoustically), ethically and aesthetically evaluative concepts, the concept of democracy, my concept of macaroni ...  (If there are exceptions to this, they are mathematical or set-theoretical concepts.)

 

            Conceptual contents are the availability in experience of (part of) a task domain.  CTC contents are the availability in experience of substrate domain abilities.  Nonconceptual contents, we saw, cannot be explained by means of t-domain specifications, so nonconceptual contents cannot be explained in the way distinctive of conceptual contents.  Nonconceptualist psychological explanation depends on the converse possibility: that conceptual content can be explained in the way distinctive of nonconceptual content.  The exercise of concepts—which we know to consist in the availability in experience of a t-domain—turns out to be a special case of the availability in experience of s-domain abilities.  For the nonconceptualist, the notion of experiential availability of s-domain abilities is a generalisation of the notion of the experiential availability of a t-domain.  Scientific psychology should, therefore, dispense with the less general notion, and model cognition entirely in terms of the theoretical apparatus distinctive of CTC content.  But we will then need to explain the conditions under which the experiential availability of an s-domain constitutes the experiential availability of a t-domain.            

 

(7) Objectivity Constraints

            If nonconceptual content is to play the role in nonconceptualist psychological explanation that I have indicated — as the base for a progressive construction of objectivity — then we need to understand why certain nonconceptual cognitive states are not properly interpreted as presenting the objective world to a subject, and why certain other, more sophisticated, nonconceptual states do count as providing objective registrational capacities.

 

            The conceptualist need say very little about objectivity[85] since an objective relation between mind and world is a presupposition of his psychological theorising.  But the nonconceptualist cannot afford this luxury because, for him, psychological explanation is the explanation of the transition from a pre-objective state to an objective state.  We need to understand how there can be a principled, if not sharp, distinction between creatures like paramoecia and creatures like us, between infantile and adult cognition, perhaps also between normal and demented cognition.  With a fair grasp of the principle of this distinction, we can then address, in sections (8) & (9), how best to model the computational processes that can transform a creature from lying on one side of the distinction to lying on the other side.

 

7.1  Some Intuitions about Objectivity

            Our notion of the world is a notion of what is independent of an organism's idiosyncratic relations to the world, because it is a notion of what is common for all.  If the idiosyncratic relations are specified informationally, then our notion of an element of the world is a notion of something which is independent of any particular informational relation to it.  If the idiosyncratic relations are specified experientially, then our notion of an element of the world is a notion of something which is independent of any particular subjective experience of it.  Different kinds of representational system may gain very different perspectives on the world, but we can talk of “perspective" only because there is a common focus; a common something which may be glimpsed in very different ways, a common basis for agreement and disagreement.  A theory of objectivity is a metaphysical theory of what it is for there to be a world in this sense.  It is a theory of what it is for the significance of an organism's particular relations to the world to go beyond what is particular in those relations, to go beyond the particular energy configuration, or the particular configuration of sense-data.  If all there is to being in the relational state is having that energy configuration, then being in the relational state cannot “present" anything which goes beyond the particular relation itself.  So it cannot present something which is common for all.  A theory of objectivity is, thus, a theory of the “cognitive" separation between what is particular to an organism's relations to the world and the world itself; it is a theory of what it is for there to be a distinction between the cognitive processes of an individual and the common world.

 

            One way to investigate what objectivity is and what it is to register the world objectively is to ask why we might think that this cognitive separation characterises human relations to the world but not those of paramoecia or frogs.  What are the criteria for the distinction between thermostats, voltmeters, paramoecia and frogs, on the one hand, and cognizing persons on the other?  Can we give a theoretically grounded distinction between conceptual response to the commonsense world and transducer response to nomic properties[86]?

 

7.2 A Coherence Test for Objectivity: The Case of Frogs and Automatic Screwdrivers

            I propose a test for this distinction which is based on the insight that the explanation of mind and world is inter-dependent.  The test, in essence, is this: take the capacities of the functioning system as nonconceptually described, and attempt to interpret these capacities as conceptual.  The attempt will be successful if, and only if, the (putative) world which would be presented by means of the (putative) conceptual capacities is a coherent world.  The distinction between concept-exercising and non-concept-exercising organisms is made to rest on a metaphysical theory of coherence conditions for the world, or what I call the objectivity constraints.[87]

 

            Consider a simple example: is it correct to attribute concepts (and therefore a capacity to register the world objectively) to an automatic screwdriver equipped with some simple sensory apparatus which detects, as we loosely say, whether a screw is present, and if so whether it is screwed in or unscrewed?  Must this talk of detecting screws be interpreted instrumentalistically[88] (Dennett (1987)), or is it essentially the same kind of talk as what we take to be the realist attribution of conceptually-based perceptual mechanisms to ourselves?  In applying the test we would ask first, what would a world be like which was presented to a system with these nonconceptually characterised capacities?  (Would it meet the constraints on objectivity?)  We would attempt to interpret the capacities as conceptual, and would answer that it would have to be a world which just contained screws, and two properties, screwed or unscrewed.  But such a world is not coherent: screws can only be part of a world in which there are factories which make them, properties of rigidity, length, and weight which they have, a location where they are, a direction in which they are screwed, ...  So the automatic screwdriver does not count as possessing any concepts at all.

 

            This is what shows that talk of concepts of screws in connection with the screwdriver is extrinsic talk, in virtue of the artifact's functional location within the human world.  We characterise instrumentalistically the abilities of the screwdriver in terms of elements of the human world, because we can happily presuppose the human world for the purposes of designing and evaluating the artifact.  It is only because we can make this presupposition that we can talk of the artifact “detecting screws".  So understanding the abilities of the screwdriver does not help at all with understanding how it can be that, for certain physical systems, a world is conceptually available to them. 

 

            Similar points apply to our descriptions of, say, frogs' behaviour, where our talk of frogs' detection of flies again depends on the extrinsic attribution of concepts, an attribution which we undertake in order to understand the evolutionary “design" of the frog.  Again we presuppose the human world because if we do so we get fairly good predictions of the frogs' behaviour, and a fairly good understanding of the success of the frog design, without becoming submerged in the physiological details or the pure information-theoretic account.  There is no intrinsic and non-instrumentalistic attribution of concepts to frogs because any attempt to interpret the nonconceptually characterised abilities of frogs as conceptually presenting a world yields an incoherent world.  Nothing could[89] be a fly if it didn't have a size, but the success of the frogs' detection system depends on not discriminating flies in the near-distance from massive objects in the far distance.  Nothing could be a fly if it couldn't be stationary, but the success of the frogs' detection system depends on the movement of flies.  A frog's notion of a fly is a notion of an always-mobile something which has no size.  So it is not a notion of a fly.  So frogs don't have fly-concepts.

 

7.3 The Holism and Generality Constraints

            These examples motivate a holism constraint on objectivity: nothing could count as a concept of an object or property unless it was a part of a complex, holistic web of concepts for the reason that nothing could be such an object or property unless it was a part of that complex, holistic web of objects and properties that is the referent of the conceptual system (and conversely).

 

            The holism constraint adds power to Evans's generality constraint: An organism does not possess a concept *a* of an object unless it can think *a is F*, *a is G*, and so on for all of the concepts *F*, *G*, ... of properties which it possesses (and which are not semantically anomalous in combination with *a*).  And, similarly, an organism does not possess a concept *F* of a property unless it can think *a is F*, *b is F*, and so on for all of the concepts *a*, *b*, ... of objects which it possesses.  Thought is essentially structured[90] because the world is essentially structured, and conversely.
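            The combinatorial demand which the generality constraint places on concept attribution can be made vivid with a small sketch (the concept labels are invented, and recall the frog case of §7.2):

from itertools import product

def generality_gaps(object_concepts, property_concepts,
                    attested, anomalous=frozenset()):
    # Recombinations which the constraint demands but which are
    # unattested: possessing *a* and *F* requires the capacity to
    # think *a is F*, unless the combination is semantically anomalous.
    required = {(o, p)
                for o, p in product(object_concepts, property_concepts)
                if (o, p) not in anomalous}
    return required - attested

# Attributions we might be tempted to make to the frog:
attested = {("fly", "is moving"), ("fly", "is to the right")}
print(generality_gaps({"fly"},
                      {"is moving", "is stationary", "is to the right"},
                      attested))
# -> {('fly', 'is stationary')}: no warrant for this thought, so, by
#    the constraint, no warrant for a fly-concept at all.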

 

            The generality and holism constraints, grounded as they are in the metaphysics of objectivity, are of enormous value in providing a principled basis for separating concept-exercising systems, for which there is a cognition/world distinction, from systems which cannot exercise concepts, for which there is no such distinction.  For example, consider the mechanism of auditory localisation in humans.  The information processing mechanism carries out various computations which depend on the speed of sound and the distance between the two ears.  We may speak of the system representing these quantities.  How similar is this notion of representation of distance and speed to the representations which I employ when I plan to eat my lunch at Stanford?  Are we justified in attributing concepts to the auditory mechanism, as we are justified in attributing concepts to the whole person?

 

            By the generality constraint, the localisation mechanism could be capable of thinking *this sound is travelling at x m/s* (where x m/s is the speed of sound) and of thinking *the distance between the ears is y m* only if it were also capable of thinking *this sound is travelling at y m/s* and *the distance between the ears is x m*.  But there is no warrant, of the kind there is for attributing the content *this sound is travelling at x m/s*, for attributing the content *this sound is travelling at y m/s* or the content *the distance between the ears is x m*.  And, by the generality constraint, nothing could be a warrant for the capacity to think *this sound is travelling at x m/s* or *the distance between the ears is y m* unless it was also a warrant for the capacity to think *this sound is travelling at y m/s* and *the distance between the ears is x m*.  Hence there is no proper warrant for the concepts *x m/s* and *y m*.  So, although there may be reasons for attributing representations, in the broad sense of “representation", to the mechanism of auditory localisation, it is not correct to attribute concepts to this mechanism.  And, similarly, by the holism constraint, it makes no sense to suppose that a system possesses the concept of some number n if it is not also capable of thinking of (n-1).  And again, ...
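            For concreteness, here is a minimal sketch of the kind of computation at issue (the plane-wave formula Δt = d·sinθ/c and the parameter values are illustrative assumptions of the sketch, not claims about the actual mechanism):

import math

SPEED_OF_SOUND = 343.0    # x m/s: wired in as a fixed parameter
EAR_SEPARATION = 0.215    # y m: likewise wired in

def azimuth_from_itd(itd):
    # Direction of a sound source, recovered from the interaural time
    # difference on a simple plane-wave model.
    return math.degrees(math.asin(itd * SPEED_OF_SOUND / EAR_SEPARATION))

print(round(azimuth_from_itd(0.0003), 1))   # -> 28.6 (degrees)

The sketch displays why the attribution of the concepts *x m/s* and *y m* is unwarranted: the two quantities enter only as fixed parameters of a single computation, and nothing in the mechanism could so much as entertain their interchange.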

 

            Evans (1982) employs the generality constraint to show that possession of an information link[91] characteristic of a perceptual nonconceptual content is insufficient for possession of a concept.  If my registration of my coffee cup consisted solely in the perceptual information link currently between me and the cup, then although we could imagine attributing to me the thought *that cup is grey* (cf. *that fly is to my right*) on the ground that my information link delivers information about the colour of the cup, there would be no ground for attributing to me the thought *that cup was manufactured in Stoke*, or *that cup won't move when the lights are out*, or *that cup will be smashed tomorrow* even though, we are to suppose, I have the concepts of these properties.  The reason for this is that if my capacity to register the cup is exhausted by my capacity to discriminate colour and shape properties of the cup (on the basis of the information link), then I cannot register the cup as the kind of thing that exists in the dark, or was manufactured at some other location at some other time.  Moreover, if my demonstrative thought about the cup consisted solely in my information link with it, there could be no basis for my ever being in error about the cup.  A notion of cognition grounded solely in information links with the world would be a notion of perfect cognition of the world, and would therefore be a notion which provided for no distinction between cognition and the world.  (The separation between me and the sensory deliverances from the cup would be no greater than, and no different from, the separation between me and the deliverances of my retina.  But this latter separation is not a cognitive separation: there is no sense in which my cognition is about my retina.)

 

            These objectivity constraints provide a metaphysically grounded way to assess physical systems for the possession of concepts, as the examples of the automatic screwdriver, the frog, the mechanism of auditory localisation and the information link with the cup demonstrate.  But they also function as success criteria (a target) for the nonconceptualist psychological enterprise.  Nonconceptualist explanation is explanation of how it is possible for physical systems to make the transition from being in states which are characterisable solely by means of specifications appropriate to nonconceptual content, to being in states which approximately satisfy the objectivity constraints, and which therefore are also specifiable in the way which is appropriate to conceptual content.  And to be in states which are specifiable in the way appropriate to conceptual content is to satisfy certain logical norms: for example, that these states, or complexes formed from them, can enter into correct patterns of inferential connection, can be assessed for truth-value, and so be the bearers of genuine, full-blooded intentionality.  To show how to build nonconceptually characterised structures which approximately satisfy the objectivity constraints is to show how objectivity can emerge within the physical world.

 

(8) Perspective-Dependence

            With a principled basis for objectivity now in play, we are in a position to explain why many types of CTC (§4) do not present the world to a subject as the objective world. And we are also in a position to understand what transformations must be applied to these types of CTC in order to yield types of CTC which do present the world to a subject as the objective world.

 

8.1 Perspective-dependent Abilities v Perspective-independent Abilities

            Conceptual contents are the availability in experience of a t-domain, and CTC contents are the availability in experience of an s-domain (§4).  Nonconceptualist psychological explanation depends on showing that there is a spectrum of CTC contents, at one end of which the s-domain experiential availability entails t-domain experiential availability in virtue of the approximate satisfaction of the objectivity constraints.  I want to suggest that this spectrum can be ordered as a dimension according to the degree of perspective-dependence of the s-domain abilities which are experientially available.  The degree of perspective-dependence of the abilities by reference to which a CTC content is canonically specified is the degree to which the content fails the objectivity constraints.  Conversely, the achievement of perspective-independence in these abilities is what is required for approximate satisfaction of the objectivity constraints, and therefore what is required in order for the content to present a t-domain.  Therefore, the theory of the whole range of contents specified by means of abilities which are perspective-dependent to differing degrees is a generalisation of the theory of conceptual content, which concerns itself with only one end of this range.

 

            One kind of perspective-dependent ability is the ability to find one's way through a city where the ability depends on starting at a certain location within the city and going to a particular location elsewhere in the city.  The starting location will be identified by its appearance, and then the ability to follow the route from start to finish will depend on recognising each landmark along the route partly in terms of its appearance, partly in terms of the fact of its being the nth landmark in the series of landmarks beginning with the starting location.  Associated with each landmark will be an orienteering instruction: “turn right and carry straight on", and so forth.  The ability will not result in satisfactory performance if the project requires not starting at the starting location, or not going from the start to the finish by the particular route, or going to a different finish location, or if the appearance or relative location of any of the landmarks alters.  In all of these ways, the ability is heavily dependent on the perspective on the domain that the system must adopt.  Certain points of view within the domain (from the starting location and from the landmarks) are privileged, in that the ability to find one's way through a city is dependent on occupying these particular, privileged points of view (perhaps in a certain order).  The notion of point of view is an experiential notion.  A perspective-dependent ability to find one's way around is an ability which depends upon the system's having certain particular kinds of experience: those it will enjoy from the privileged points of view. 

 

            A perspective-independent ability to find one's way through a city is such that were one to emerge within the city from a manhole, then wherever one emerged one would be able, barring external obstacles, to find one's way to any other point within the city.  The ability to find one's way does not depend on a particular perspective (from the starting location) or a particular set of perspectives (a route).  There are no privileged points of view, so the perspective-independent ability to find one's way through the city is an ability which yields satisfactory performance whatever one's point of view within the city.  The ability is dependent on experience of course, but it is not dependent on the system's having particular kinds of experience (a landmark-1 type experience followed by a landmark-2 type experience, etc.).  Performance is maintained whatever the experiential perspectives which the system happens to enjoy.
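            The contrast can be given a crude computational gloss (an illustrative sketch only; the city graph and landmark names are invented).  A perspective-dependent ability is in effect a fixed list of landmark-instruction pairs, usable only from its privileged starting point; a perspective-independent ability is a procedure, such as search over a map, which yields a path between any two points:

from collections import deque

# Perspective-DEPENDENT: a single route, usable only from "station".
route = [("station", "turn right"), ("church", "straight on"),
         ("market", "you have arrived")]

# Perspective-INDEPENDENT: a map plus search, usable from anywhere.
city = {"station": ["church", "park"], "church": ["market"],
        "park": ["market"], "market": []}

def find_path(city_map, start, goal):
    # Breadth-first search: satisfactory performance from whichever
    # manhole one happens to emerge.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in city_map.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path(city, "park", "market"))   # -> ['park', 'market']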

 

            The general notion, here, is that of experience-based ways of finding one's way through a domain which depend, more or less, on having particular kinds of experience: those gained from privileged points of view.  The literally spatial case provides the most direct example, but the contrast between perspective-dependent ways of finding one's way through a domain and perspective-independent ways of finding one's way through a domain can be drawn however abstract the domain.  Recognition of properties, for example, may depend on the context in which the property is satisfied, whether the property is that of coffee, electromagnetism, or “freedom fighter".

 

8.2 From Perspective-dependent Abilities to Perspective-dependent Contents

            Given a contrast between perspective-dependence and perspective-independence for abilities, we may apply the contrast to nonconceptual contents which are canonically specified by reference to these abilities.  Thus, a perspective-dependent content is a content which is canonically specified by reference to perspective-dependent abilities.  In virtue of the intrinsic nature of the content[92], it can be entertained only from a particular perspective, or restricted set of perspectives.

 

            This is a very strict notion of perspective-dependence of content, because it is a notion which infects the content itself, rather than merely the external conditions under which the content may be grasped, or the conditions under which it has been learned.  Thus, my ability to register my mother is an ability that depends in many ways on having the perspective of a son.  Yet I still register her as someone to whom many other kinds of relations are possible; this sort of perspective-dependence of a registrational ability (which is quite independent of the theory of the specification of b contents by means of concepts of the s-domain) does not infect the content of the ability.  My registration of an elm tree is a registration from a non-botanical perspective (indeed, I can only grasp, in my present state, the content *elm tree* from a non-botanical perspective), yet it is a registration of the referent as the kind of thing which is also available to a botanical perspective.[93]  I can only grasp the content *grass is green* if I am not asleep, or at least not dreaming.  But the nature of the content itself (what I grasp) is quite unaffected by this content-extrinsic perspective-dependence.

 

            Thus, perspective-dependent contents, which are introduced as contents which are canonically specified within the theory of content by means of perspective-dependent s-domain abilities, involve a quite different and much more radical notion of perspective-dependence than the usual one which is concerned only with dependence on conditions under which a content may be grasped.  The less radical notion does not help at all with the explanation of why the content of certain states fails the objectivity constraints.  But the more radical notion is the basis of such explanation. 

 

            It may help to consider an extreme example.  Pain contents are specified in terms of the capacity to be in pain.  But this capacity is heavily perspective-dependent: As a way of negotiating the world, it depends on having only one particular kind of experience: experiencing pain or remembering experiencing pain.[94]  The capacity depends on a unique, privileged point of view.  This perspective-dependence does infect the content; pain experience does not present the world as being a way which is independent of how it is experienced; as being a certain way which is available to concept-exercising creatures who can't experience pain. By contrast, perspective-independent contents present the world as being potentially available to the perspective of any concept-exercising creature.[95]  They do so because they function successfully whatever the particular kinds of experience which, on an occasion, a creature employing such contents uses to negotiate its way through the domain.

 

8.3 Perspective-(in)dependence, the Objectivity Constraints and Task Domains

            It is by being approximately perspective-independent that certain contents approximately satisfy the generality constraint.  Contents which present my coffee cup to me present it to me as being the kind of thing which could, for example, be available to the experience of a cup-manufacturer in Stoke.  So I am able to entertain the thought that my coffee cup was manufactured in Stoke.

 

            Where a content fails to meet the generality constraint, the content is not fully structured, so that it cannot, for example, enter into a full range of inferences.  We might modify the usual notation for conceptual content by representing the loss of structure by hyphenation.  So, for an unlikely example, I might have a registration of Jones which is dependent upon adopting an “eating perspective" with respect to Jones (as the frog's registration of a fly is not a fully spatial registration, but rather a “direction and motion-dependent registration").  If it were correct to interpret my content as a concept, then I could think *Jones is eating*.  But since my Jones-recognising capacity, in terms of which my Jones-contents are canonically specified, is dependent on an eating perspective, I could not entertain the thought *Jones has his mouth taped shut*, or *Jones used to be a zygote back in Manchester*, even though I had the concepts *... has his mouth taped shut*, and *... used to be a zygote back in Manchester*.  Therefore my content fails the generality constraint in virtue of being perspective-dependent; it is thus unstructured and should be represented as *Jones-is-eating*.

 

            Similarly in the case where my ability to think about a distant location in a city as *there* is canonically specified in terms of a route-dependent capacity to find my way around the city.  Being here, I can think *there*, lobbing a thought to one of my accessible goal-locations.  But my capacity to entertain thoughts about “there" depends on my adopting the perspective of “here" (for a particular set of locations), and thus is perspective-dependent and so fails the generality constraint.  My content is really *here-to-there*.

 

            Because certain contents are perspective-dependent, they do not present their objects as being the kind of thing which could be available to any contentful perspective; hence, they do not satisfy the generality constraint.  If we attempt to specify their content in a conceptualist fashion, then we shall fail because the content fails the generality constraint, and therefore cannot be canonically specified by means of a semantic relation to an objective item of the world (the generality constraint was a constraint on objectivity).  Perspective-dependence entails failure of the objectivity constraints entails failure of canonical t-domain specification.  It is because certain nonconceptual contents are perspective-dependent that they do not present the world as the objective world; and it is because certain nonconceptual contents are very little infected by perspective-dependence that they count successfully as presenting the world as the objective world.  Perspective-independence entails satisfaction of the objectivity constraints entails canonical t-domain specification.  This suggests what kind of psycho-computational modelling is required to yield nonconceptualist explanation of conceptual capacities: psycho-computational transformations defined over nonconceptual contents which have the effect of reducing the perspective-dependence of the contents.  I will turn in a moment to how we might achieve this effect.

 

8.4 Task-domain Independence and the Centrality of Learning 

            The general case of perspective-dependence can be easily stated: where the psychological success of a whole system or ability (rather than the evaluation of a particular exercise of an ability at a particular time) depends on being evaluated with respect to[96] a task-domain (in the sense defined in §4.311), the content which is canonically specified in terms of these abilities is perspective-dependent and so cannot ground the possession of concepts.  The frogs' “fly-thoughts" are not really fly thoughts because their success (and hence their content) depends on special features of the frog task domain (the cost of tongue-swipes at massive distant objects is outweighed by the benefit of successful fly catches); frog “cognition" is dependent on the perspective of a particular task domain.  It cannot generalise.  My “Jones cognition" depends on an eating task domain, and my ability to find my way around Palo Alto depends on Palo Alto being a route-structured task domain.  (Redesign a few buildings at a major route intersection and my entire capacity could be wiped out.[97])

 

            The way to make a system with contents which are doomed to perspective-dependence is to make it so that its success is task-domain dependent.  So, conversely, in order to have a chance of building a system with perspective-independent contents, one must build it so that its success does not depend on the contingencies of some task domain.  Ironically, for a cognitive system to be an objective system whose contentful states are canonically specifiable by reference to a t-domain, the capacities of the whole system—as nonconceptually characterised—must be t-domain independent.  It must operate at a level at which its abilities can modify themselves so as to transfer between t-domains; that is, at the level of those learning processes that give rise to the registration of task domains.  Hence this nonconceptualist theorising[98] puts learning at the core of cognition, rather than at a peripheral transition stage through which we must pass in order to reach cognition proper.

 

            In summary, contents are nonconceptual in virtue of being perspective-dependent[99], so we can represent the transition from a nonconceptual content to a conceptual content as the transition from perspective-dependence to perspective-independence.  I need a registration of Jones which is available from any perspective that I could adopt towards him.  For example, I need to be able to recognise him, or know what it would be to recognise him, not just when eating, but when swimming, reading, with his mouth taped shut, and as a zygote back then in Manchester.  Thus my ability to recognise Jones has to be independent of any particular task domain.  Similarly, I need to have a registration of Palo Alto which is not route-dependent, so that I could, for example, pop up out of a man-hole anywhere in the city and find my way to any other point in the city which might be my goal location.  This is not a registration of Palo Alto which is from no point of view (whatever that might be), but a registration which does not depend on one's occupying some particular place within Palo Alto (or set of places, e.g., a route).  My registration of Palo Alto is a view from anywhere in that I can find my way wherever I pop up (so with no prior route).  Escape from perspective-dependence, and, hence, the achievement of objectivity, is gained not by somehow cutting one's cognitive life free of perspective altogether (a God's-eye view), but by making it a better and better approximation to being such as to work whatever perspective one adopts.  Cognitive life couldn't be cut free from perspective, but it must be cut free from task-domains.

 

(9) Computational Vehicles and Cognitive Maps

            So far I have considered whole-system contents which are specified in terms of abilities of the whole system.   The representational vehicles for these contents are whole-system abilities.  I have indicated the form of a nonconceptual analysis of these contents. I have shown what must be achieved in order for this analysis to yield approximate satisfaction of the objectivity constraints, and thereby for the representational vehicles to carry conceptual contents, with their constitutive inferential roles.  This has all been at the level of psychological explanation (figure 2).  Finally, we must go sub-personal and examine the computational vehicles for these contents, in terms of which they will be modelled by cognitive science.

 

9.1 Map-Making and Map-Use     

            In order to pursue the C3 programme, we need to look at computational ways to reduce the perspective-dependence of the content of the representational states of the system.  A good research strategy for this is to examine the transformations of external, communicative representations which have this effect.  The importance of such consideration from the point of view of C3 lies not in the communicative representations which are constructed (maps or marks on a white-board), but in the process of representation construction and representation use.  It will be analogues of these processes that a C3 model will need to implement computationally.

 

            Consider how an ordinary map works.  We may imagine that the map is constructed by a person walking around a territory, observing how the territory appears to him from each of his perspectives.  The map-maker walks through the territory and thereby has an egocentric registration of the layout; that is, a registration of where things are which is in terms of how they are related spatially to the map-maker.  This is a perspective-dependent ability, because were a map to reflect only this knowledge, it would yield a registration of the territory which depends on following one particular route through the territory: the route which the map-maker followed.  And it would therefore be of no use to anyone who wished to follow a different route through the territory.  We may say that such a “map" represents only egocentric space, and not objective space; just as an organism which has access only to nonconceptual contents can register the world only egocentrically, not objectively.

 

            Notice that at these early stages, there is no independence between the map-user's registration of where he is and the map-user's registration of what properties there are at the location.  Places are identified simply by how they appear, or by their proximity to landmarks, so if one finds oneself at a place in the objective world where there is no wood, then one could not be at a location marked on the simple graphical representation as having a wood.  Such a simple map cannot be in error about what it is like at the places it represents, since the way in which it represents the locations which make up the space (the location contents) is wholly dependent on how it represents what it is like at those places.  The cognitive analogue of this simple spatial representation would be such as to yield a registration of a place about which one can only think that it has the properties that it appears to have.  One could not think, for example, *they have chopped down the wood at this place*.  There is no distinction between how things appear in one's cognition to be, and how they are.

 

            With time, the map-maker would follow many different routes through the territory, and so would come to be able to represent a multiple route-based registration of the region.  A preliminary graphical representation of this knowledge would represent particular routes, and features by means of which one could identify where on the route, or on which route, one was.  (Many tourist maps are like this.)  Were someone else to use such a map they would have a route-dependent knowledge of the territory.  The more complex the map becomes, the more it becomes possible to identify a location by knowledge of where one has been and how one has travelled, rather than simply in terms of what properties the map represents as being true of the location.  For the cognitive analogue, there are the beginnings of a distinction between how things appear and how they are; the beginnings of the possibility of error.  The cognitive map cannot yet ground false judgements about represented locations, but other, less sophisticated norms can apply: the map can be misleading, for example.[100]

 

            As he follows a number of routes through the space, and by using his tracking skills, the map-maker is able to determine how certain regions appear from different perspectives, and how different perspectives are related to each other, and so he is able to start drawing what we think of as a (topological) map of the space.  Such a map captures the space of the territory so that each place is represented in the same way as every other place[101] (rather than in terms of its relation to where the map-maker is, at stage one; or in terms of its relation to other places on the route, at stage two).  No place, or set of places, has a privileged representation.  Once in possession of the map, one's registration of the space is not limited to a sense of how the territory looks when following certain routes through it, but rather is a registration of the space from no particular point of view (a view from anywhere); a registration which has utility wherever one finds oneself in the space.  At this stage the identification of places is no more dependent on the identification of properties than the identification of properties is dependent on the identification of places.  Because of the holistic character of the representation in the map, it would be possible for the local quarry to remove a hillside, and one could still identify where one was.  Hence the map user could tell that the map was now wrong about the hillside.  A subject with a fully-fledged cognitive map could therefore form true judgements about the location.  He or she could think *there used to be a hillside here*, because the way of thinking of the location is no longer exhausted by being in receipt of information about how the place now looks.
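            A crude computational rendering of the three stages (an illustrative sketch; the place names are invented): stage-one and stage-two knowledge consists of stored route sequences, while the third stage fuses many such sequences into a single relational structure in which no place, and no starting point, is privileged:

def merge_routes(routes):
    # Fuse route-dependent observations into one topological map: each
    # place becomes a node identified by its relations to other places,
    # not by its position in any particular walk.
    topo = {}
    for walk in routes:
        for here, there in zip(walk, walk[1:]):
            topo.setdefault(here, set()).add(there)
            topo.setdefault(there, set()).add(here)
    return topo

walks = [["gate", "wood", "hill"],       # stage one: a single route
         ["gate", "pond", "hill"],       # stage two: several routes
         ["pond", "quarry"]]
print(merge_routes(walks))               # stage three: a single map

Because each place is now fixed by its web of relations rather than by its appearance along a route, the hillside can still be identified after the quarry has removed it; and that is what grounds the judgement *there used to be a hillside here*.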

 

            In terms of the map-user, this story identifies a sequence of progressively less perspective-dependent abilities to find his way around, and in terms of the map-maker, it identifies the progressive construction of abilities that are less and less perspective-dependent.  Moreover, the sequence appears to be of just the sort that C3 requires, because as a more sophisticated map is constructed, the kinds of error that are possible, and therefore the level of objectivity, become more sophisticated.  C3's task, then, is to show how to provide a computational analogue of the map-making and map-using story, for each cognitive domain.

 

            The attainment of cognitive perspective-independence may be thought of as the computational construction of cognitive map-making and using abilities (O'Keefe and Nadel 1978).  In the limit, possession of a cognitive map would entail that every place was thought about in the same way as every other place.  Of course, no actual cognitive map could fully achieve this aim, since the map will always be bounded.  But that is as it should be, because the objectivity constraints are idealisations to which, in principle, nothing could absolutely approximate.  The point, though, is that the construction of a cognitive map on the basis of egocentric, perspective-dependent abilities is the right way to achieve a relative approximation to the objectivity constraints.  If a subject can think of place A in the same way as he or she thinks of place B, then the ability to entertain any thought about place A (*A is F*) will entail the ability to think that thought about place B (*B is F*).

 

            It may have worried the reader that the map-making story I have told was only in terms of an atemporal registration of locations.  I helped myself to the registration of properties, and to the registration of time.  But it does not seem too far-fetched to suppose that one could tell analogous stories for these too.  We talk of the ways we have of locating ourselves in time, so the idea of a map of “temporal space" seems quite natural.  Likewise with properties: it seems natural to tell a map-making story of how our discriminatory capacities can come to be less and less perspective-dependent.  There is a great need for lots and lots of detail here, but my purpose in this paper has been to see how C3 can provide a possible alternative to LOT, not to predict the results of years of detailed empirical research in a cognitive science informed by C3.

 

 

9.2 Connectionism: an appropriate computational architecture for C3?

            Can connectionism[102] provide a computational architecture which is appropriate for modelling cognition according to the C3 framework?

 

            Paul Smolensky (1988) has held that the Proper Treatment of Connectionism (PTC)—in order to provide an alternative way of modelling cognition—requires that connectionism model “sub-conceptually" at a level between the “symbolic" and the neural.  If he is right, then given the discussion in this paper, it follows immediately that PTC must not aim to give conceptualist psychological explanation (§5).  The conceptualist does not recognise any representational level beneath the level of atomic concepts; any level of explanation beneath the symbolic conceptual level is implementation theory.  But PTC must open up a cognitive explanatory gap between the conceptual level and any level of implementation theory.  So PTC cannot be conceptualist.

 

            Tracing this consequence down through the hierarchy of figure 2 (§1), it follows that PTC needs to make use of a notion of content which is different from the notion of conceptual content, and hence to use a representational theory different from the S/S representational theory (§4).  But we also saw (§3) that LOT was able to account for how computational psychology can respond to the problem of embodied cognition only if it employs S/S theory.

 

            So PTC appears to be caught in a bind: on the basis of the only account we have, PTC must employ S/S theory in order to respond to the problem of embodied cognition.  But, to provide the cognitive alternative that it aspires to, it cannot aim to give conceptualist psychological explanation.  It can only find some alternative insofar as it doesn't employ conceptual content, and hence insofar as it doesn't employ S/S theory.  Just as Fodor and Pylyshyn (1988) thought, S/S theory is PTC's nemesis.

 

            But I hope to have shown (§4, 6, 7, 8 & 9.1) how the CTC notion of nonconceptual content makes possible an alternative to S/S theory which can nevertheless speak to the problem of embodied cognition.  C3 can do this because (1) CTC and the vehicle of CTC are not explanatorily independent of each other, whereas explanatory independence is essential to the relation between a semantic account of content and syntax; (2) C3 provides a notion of nonconceptual content which cannot be specified by a semantic theory; and (3) there is a coherent, nonconceptualist notion of psychological explanation which can be built on top of the notion of CTC, and in terms of which we can account nonconceptually for the existence in the natural world of concept-exercising creatures.  PTC and C3 belong together in a match as tight as GOFAI[103] and LOT.

 

            But there is one point I am anticipating: that connectionist architectures are appropriate for modelling with CTC.  Unlike most of the other connections between levels of figure 2's hierarchy, this one is largely empirical.  But it is impossible to read much of the connectionist literature without having its need for a notion of nonconceptual content impressed forcefully upon one.  I shall close with some simple, merely suggestive, reasons for thinking that connectionist architectures are appropriate for C3 modelling.  These reasons can only be suggestive because this is an empirical issue, and, in any case, there is no reason why Von Neumann computational architectures cannot be used to model C3[104]: although it is natural to implement S/S theory in Von Neumann architectures, the power of these architectures goes well beyond the use that S/S theory can make of them.  It may be that PTC needs C3 more than C3 needs PTC.

 

            Smolensky holds that in order for connectionism to model the productivity of thought, it must exploit the representational constituent structure that distributed representation makes available.  Smolensky (1987 & 1988) considers (in a deliberately toy example) how connectionism represents *cup with coffee* in a structured way, without doing it as a syntactic relation between an item associated with *cup* and an item associated with *coffee*.  His example is useful for our purposes because it indicates the first reason why connectionism is naturally appropriate for C3 modelling: that connectionist systems naturally adopt perspective-dependent representation.  Here is a representation of a connectionist representation of *cup with coffee*:

 

Figure 3

 

And here is a representation of a connectionist representation of *cup without coffee*:

 

Figure 4

 

In order to see what the representation of *coffee* might be in a connectionist system, we just have to subtract the representation of *cup without coffee* from the representation of *cup with coffee*.  A representation of the result:

 

Figure 5

 

The point of this is to see that the connectionist representation of *coffee* is heavily context dependent; it is a representation of *coffee-in-the-context-of-cup*.  There will then also be representations of *coffee-in-the-context-of-instant-coffee-granules-in-a-jar*, *coffee-spilt-all-over-my-paper*, *coffee-in-the-context-of-someone-who-has-drunk-too-much-coffee-looking-rather-grey*, and so forth.  Evidently, the inferential possibilities of combination of any one of these context-dependent representations of *coffee* will be limited to inferences that are appropriate in that context.  All coffee helps you digest.  But coffee-in-the-context-of-being-spilt-on-my-paper has the opposite effect.  The concept *coffee* works very differently.  For example, anything that can be said about coffee can be said with words that express the concept *coffee*.  But an advertiser working for Nestlé cannot advertise Nescafé by expressing *coffee-in-the-context-of-someone-who-has-drunk-too-much-coffee-looking-rather-grey makes you feel wonderful*.
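            Smolensky's subtraction can be made concrete with toy microfeature vectors (the features and values here are invented for illustration, and are not Smolensky's own):

features = ["brown liquid", "burnt odour", "porcelain surface",
            "finger-sized handle", "hot liquid contacting porcelain"]

cup_with_coffee    = [1, 1, 1, 1, 1]
cup_without_coffee = [0, 0, 1, 1, 0]

# Subtracting the second from the first leaves the connectionist
# "representation of coffee"; note that what remains is still
# cup-infected ("hot liquid contacting porcelain"):
coffee_in_context_of_cup = [a - b for a, b in
                            zip(cup_with_coffee, cup_without_coffee)]
print(coffee_in_context_of_cup)   # -> [1, 1, 0, 0, 1]

A parallel subtraction performed on *jar with instant coffee granules* would leave a quite different vector; no single, context-free *coffee* vector is recoverable by this method, and that is just the perspective-dependence at issue.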

           

            Connectionist representations, then, are naturally perspective-dependent, and so fail to satisfy the generality constraint.  But this need be so only for the representations on the input and output units.  Much of the attention in developing connectionist algorithms has been directed to understanding how you can get useful behaviour out of systems whose input and output representations are perspective-dependent.  If C3 is right, this is just what we want for modelling cognition.  Weights between hidden units can be evolved so that processes which involve them are responsive to the connections between many different perspective-dependent coffee representations.  And this is clearly encouraging in a context which has argued that our possession of the concept *coffee* must, like the possession of any concept, depend on a perspective-reducing construction from numerous perspective-dependent representations of coffee.

 

            A second reason for thinking that connectionism is appropriate for C3 modelling is that “learning" is central to connectionism in a way that it is not to Von Neumann architectures.  This makes it easier to think about how to implement C3 models in a connectionist architecture, because C3 takes learning to be central to, indeed, the essence of, cognition.

 

            A third reason is that in any non-toy example, connectionist representations can only be interpreted by means of an analysis of the complex connections between patterns of input and patterns of output.  It has often seemed a mystery what kind of analysis can be given of the significance of the activity of hidden units, especially in cases where the input units are connected directly to some sensory system, and the output units are connected directly to some effector system.[105]  It was the failure to penetrate this mystery that led Fodor and Pylyshyn (1988) astray in their criticism of connectionism.  They supposed that the “sub-conceptual" space of microfeatures had to be thought of as an assignment of values to good, old-fashioned semantic features, thinly sliced:

 

Many connectionists hold that the mental representations that correspond to commonsense concepts (CHAIR, JOHN, CUP, etc.) are “distributed" over galaxies of lower level units which themselves have representational content.  To use common connectionist terminology, the higher or “conceptual level" units correspond to vectors in a “sub-conceptual" space of microfeatures.  The model here is something like the relation between a defined expression and its defining feature analysis: thus, the concept BACHELOR might be thought to correspond to a vector in a space of features that includes ADULT, HUMAN, MALE, and MARRIED; i.e. as an assignment of the value + to the first three features and of - to the last.  ... Since microfeatures are frequently assumed to be derived automatically (i.e. via learning procedures) from the statistical properties of samples of stimuli, we can think of them as expressing the sorts of properties that are revealed by multivariate analysis of sets of stimuli.  In particular, they need not correspond to English words; they can be finer-grained than, or otherwise atypical of, the terms for which a non-specialist needs to have a word.  Other than that, however, they are perfectly ordinary semantic features, much like those that lexicographers have traditionally used to represent the meanings of words [my emphases].

 

But this is due simply to Fodor and Pylyshyn's blindspot.  We may call it “conceptualism's scotoma": the representational structure of a content can only be its conceptual structure, because the only kind of content there is is conceptual content.  But I have not only shown that this is not so, I have shown that it cannot be so.  C3 provides a notion of nonconceptual content, and shows how to do justice to classical constraints on cognition by means of the perspective-dependence-reducing transformations involved in the formation of a cognitive map.  Connectionism may not be syntactically systematic, and PTC relies on this very fact to show how connectionism can provide an alternative to classical ways of modelling cognition.

 

            An objection which has been raised frequently against connectionism is of the form: a black box, even a very successful black box, adds nothing to our psychological understanding (see §1.2).  Train up a connectionist network in some stimulus environment and it may come to perform very successfully in that environment, but since the network has programmed itself via its learning algorithm, we will understand nothing about how that performance was achieved unless we are able to examine the weights on its hidden units and find an interpretation for those weights that allows us to understand why the system is successful.  The problem here is similar to a problem which classical artificial intelligence has also faced: How can a complex pattern of manipulations of LISP s-expressions constitute a psychological theory?  LOT solved that problem, but the solution which it provided is not available to connectionism because, usually, no coherent semantics can be defined over the hidden units.  It may be that all that can be determined about the functioning of a pattern of connectivity for the units is the way in which it mediates a connection between certain kinds of input activity vectors, and certain kinds of output activity vectors.  How could that advance our psychological understanding?

 

            CTC, like patterns of connectivity, is specified in terms of its powers of mediation between input and output.  These powers are powers of intentionality if it can be shown how processes implemented in them can approximately satisfy the constraints on objectivity.  Can theorists of connectionism show us how the powers of the hidden units can evolve under training to form a cognitive map?  If so, we will be able to explore empirically a new way in which to understand how things in the world are capable of thinking about the world.

 


9.3 A Speculative Conclusion: Connectionist Vehicles for Cognition

            In §9.1 we saw the kinds of computational processes of map-making and map-use which are required for increasingly perspective-independent abilities and therefore for CTC contents which come to approximate increasingly the objectivity constraints.  In §9.2 we saw some reasons for thinking that connectionism provides an appropriate architecture in which to implement these computational processes, though of course this will remain an open question for a long while yet.  But granting this, we can ask about the connectionist vehicles for CTC and cognitive maps.

 

            The representational vehicles for CTC contents are whole-system abilities, but these contents also have computational vehicles which are the sub-personal, causal ground of particular exercises of the abilities of the whole system.  It is natural to identify two kinds of connectionist vehicle: patterns of activity distributed over many processing units, and patterns of weighted connectivity between many processing units.  The former patterns are more transitory than the latter, but both are causally active: the processing state of the system at the next instant is a function of both the patterns of activity and the patterns of weighted connectivity.  A pattern of activity will not, by itself, manifest any particular degree of perspective-dependence, since whether or not it is the partial ground of a perspective-dependent ability will depend on the context of the input which gave rise to the activity, the context of the output to which it will give rise, and the further dispositions of the system to which it belongs.  Nevertheless, we can, in a derivative sense, speak of the perspective-dependence of a pattern of activity over some hidden units in virtue of its role in mediating between a context-dependent input representation and a context-dependent output representation (as in the coffee example).  In this sense, the patterns of activity of very sophisticated and of very crude connectionist systems will be equally perspective-dependent.  The differences between the contents implemented in sophisticated and crude networks come in virtue of the other kind of connectionist vehicle: patterns of connectivity.

 

            Tentatively, we may say that a pattern of connectivity implements a cognitive map if it mediates between context-dependent patterns of input and context-dependent patterns of output in a map-like way.  The map-maker had always to employ egocentric, context-dependent presentations of the terrain as the basis for his map-making, and the map-user had always to generate from the map egocentric, context-dependent instructions for moving within the terrain.  We can, therefore, think of a map as a function from egocentric, context-dependent representations to egocentric, context-dependent representations.  A sophisticated map which represents the mapped space from no particular point of view will belong to the class of functions which will enable the map-user to find his or her way around the terrain in a perspective-independent manner.  A pattern of connectivity within a connectionist network can implement a cognitive map if it implements a function from egocentric, context-dependent patterns of activity distributed over the input units to egocentric, context-dependent patterns of activity distributed over the output units.  It will implement a sophisticated, objectivity-grounding cognitive map if this function belongs to the class of functions which will enable a perspective-independent ability to negotiate the mapped domain.
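            A toy rendering of the idea (the names and structure are invented, and nothing here is offered as a serious model): the map figures only as the stored structure in virtue of which an egocentric input (where I stand, how I face) is carried to an egocentric output (a bodily instruction), from whichever place one happens to occupy:

HEADINGS = ["N", "E", "S", "W"]

# The stored structure (the analogue of a pattern of connectivity):
# which place lies in which compass direction from which place.
TERRAIN = {("gate", "N"): "wood", ("wood", "E"): "hill",
           ("wood", "S"): "gate", ("hill", "W"): "wood"}

def egocentric_step(place, facing, bearing_to_goal):
    # Egocentric input (my place and heading, the goal's bearing) to
    # egocentric output (an instruction).  The objective map lives
    # inside the function, never at its interface.
    turn = (HEADINGS.index(bearing_to_goal) - HEADINGS.index(facing)) % 4
    instruction = ["go straight", "turn right",
                   "turn about", "turn left"][turn]
    return instruction, TERRAIN.get((place, bearing_to_goal))

print(egocentric_step("wood", "N", "E"))   # -> ('turn right', 'hill')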

 

            It is the possession of a cognitive map of the right class which is the causal ground of a perspective-independent capacity of the system to find its way around the mapped domain.  Since we saw that it is the perspective-independence of capacities to find one's way around which is the basis of the satisfaction of the objectivity constraints by those contents which are canonically specified by reference to these capacities, it is natural to identify the cognitive map as the causal ground of the possession by the system of a concept.  Hence we may say that concepts are implemented in a connectionist system as a pattern of weighted connectivity.

 

            A given set of weighted connections may implement more than one cognitive map.  Indeed we should expect this, given the holism of concepts.  (The causal ground of the logical impossibility of possessing one concept without possessing a set of conceptually connected concepts might be the fact that it is the same set, or extensively overlapping sets, of connections which implement the cognitive maps corresponding to each of these concepts.)  In an extreme case, every concept possessed by a system would be implemented in every connection of the entire system.  Concept possession would be causally legitimated by the scientific levels of a C3 framework, but conceptual characterisations of contents would play no role in the scientific psychological explanations of the behaviour of the system.[106]

 

Acknowledgements and Dedication

This paper is dedicated to the memory of my father, Manny Cussins (1905 - 1987).

 

I would like to thank for their help with the paper or with the development of its ideas: Dan Brotsky, John Campbell, David Charles, Bill Child, Ron Chrisley, Andy Clark, Michael Dummett, John Haugeland, Dimitri Kullmann, David Levy, Michael Martin, Geoff Nunberg, Gerard O'Brien, Christopher Peacocke, Jeff Schrager, Paul Skokowski, Scott Stornetta, Brian Smith, Susan Stucky, and Debbie Tatar.  I have benefited from talks at Temple University, at the 1988 Society for Philosophy and Psychology Annual Meeting in North Carolina, at the Institute for Research on Learning, at the System Sciences Laboratory at Xerox PARC, at CSLI, Stanford, at David Charles's Oriel Discussion Group, and at a Birkbeck Discussion Group on Connectionism.  I am grateful for support from a Junior Research Fellowship at New College, Oxford, a Post-doctoral Fellowship at CSLI, Stanford, resource support from the System Sciences Laboratory, Xerox PARC, and the remarkable research environment at PARC.  And above all to Charis, for her inspiration and confidence.  

 

References

Armstrong, D.M., (1968) A Materialist Theory of the Mind, London: RKP.

Barwise, J., (1987) “Unburdening the Language of Thought" in Two Replies, CSLI Report NO. CSLI-87-74.

Bennett, J., (1976) Linguistic Behaviour, Cambridge University Press.

Block, N., (ed) (1980) Readings in Philosophy of Psychology, volume 1, Methuen.

Brachman, R.J. and Levesque, H.J., (1985) Readings in Knowledge Representation, Los Altos: Morgan Kaufmann Publishers.

Campbell, J. (1986) “Conceptual Structure" in Travis (ed) Meaning and Interpretation, Basil Blackwell.

Churchland, P.M. (1981) “Eliminative Materialism and the Propositional Attitudes", Journal of Philosophy, 78, pp. 67-90.

Churchland, P.S., (1986) Neurophilosophy: Toward a Unified Science of the Mind/Brain, Cambridge, MA: The MIT Press

Cussins (January, 1987) “Varieties of Psychologism", Synthese, 70, pp. 123-154

Cussins (May, 1987) “Being Situated Versus Being Embedded", Stanford University,  CSLI Monthly, vol 2, no 7

Cussins (1988) “Dennett's Realisation Theory of the Relation between Folk and Scientific Psychology",  Commentary on Dennett: The Intentional Stance, Behavioral and Brain Sciences, Volume 11, number 3, pp. 508-509.

Cussins (1992) “The Limitations of Pluralism" in Charles and Lennon (eds) Reduction, Explanation and Realism, Oxford: Clarendon Press

Cussins (forthcoming) “The Emergence of Objectivity: Why People are not just Complex Frogs".

Davidson, D., (1967) “Truth and Meaning" in Davidson (1984).

Davidson, D., (1974) “On the Very Idea of a Conceptual Scheme", Proceedings and Addresses of the American Philosophical Association, 47, and in Davidson (1984).

Davidson, D., (1984) Inquiries into Truth and Interpretation, Oxford: Clarendon Press.

Dennett, D., (1969) Content and Consciousness, London: Routledge and Kegan Paul

Dennett D., (1978) “Toward a Cognitive Theory of Consciousness" in Brainstorms: Philosophical Essays on Mind and Psychology, Montgomery, VT: Bradford Books

Dennett D., (1987) The Intentional Stance, Cambridge, MA: The MIT Press

Dretske, F., (1981) Knowledge and the Flow of Information, Cambridge, MA: The MIT Press

Dummett, M., (1975) “What is a Theory of Meaning" in Guttenplan (ed) Mind and Language, OUP: Clarendon Press

Dummett, M., (1976) “What is a Theory of Meaning (II)", in Truth and Meaning, ed., G. Evans and J. McDowell, Oxford: OUP.

Dummett, M., (1978) Truth and Other Enigmas, London: Duckworth

Evans (1982) The Varieties of Reference, Oxford: Oxford University Press

Evans (1980, 1985) “Things Without the Mind — A Commentary Upon Chapter Two of Strawson's Individuals" in Evans (1985), Collected Papers, OUP: Clarendon Press.

Fodor, J.A. (1976) The Language of Thought, Sussex: The Harvester Press

Fodor, J.A. (1980) “On the Impossibility of Acquiring More Powerful Structures" and “Reply to Putnam" in Language and Learning: The Debate between Jean Piaget and Noam Chomsky (ed) M. Piatelli-Palmarini, RKP.

Fodor, J.A. (1981a) “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology" in Fodor (1981b)

Fodor, J.A. (1981b) Representations, Cambridge, MA: The MIT Press

Fodor, J.A. (1983) The Modularity of Mind, Cambridge, MA: The MIT Press

Fodor, J.A. (1986) “Why Paramoecia Don't Have Mental Representations", in Midwest Studies in Philosophy, X, pp. 3-23

Fodor, J.A. (1987) Psychosemantics, Cambridge, MA: The MIT Press

Fodor and Pylyshyn (1988) “Connectionism and Cognitive Architecture", Cognition, 28, pp. 3-71

Frege, G., (1892) “On Sense and Reference" in Geach and Black (1960), Translations from the Philosophical Writings of Gottlob Frege, Oxford: OUP

Frege, G., (1918, 1977) “Thoughts" in Logical Investigations, (ed) P.T. Geach, Oxford: Basil Blackwell.

Gibson, J., (1986), The Ecological Approach to Visual Perception, New Jersey: Lawrence Erlbaum Associates.

Goodman, N., (1951) The Structure of Appearance, Cambridge, Mass.

Hofstadter, D.R., (1985) “Waking up from the Boolean Dream, or, subcognition as computation", in Metamagical Themas, Basic Books

Israel, D., (1987) The Role of Propositional Objects of Belief, CSLI Report No. CSLI-87-72.

Lenat, D.B. and Feigenbaum, E.A., (1987) “On the Thresholds of Knowledge", MCC-AI Non-Proprietary Technical Report.

Lewis, D., (1966) “An Argument for the Identity Theory", Journal of Philosophy, 63.

Marr, D., (1977) “Artificial Intelligence - A Personal View", Artificial Intelligence, 9, pp. 37-48, and reprinted in this volume.

Marr, D., (1982) Vision, San Francisco: WH Freeman

Millikan, R., (1984) Language, Thought and Other Biological Categories, MIT Press: Bradford Books

O'Keefe and Nadel (1978), The Hippocampus as a Cognitive Map, Oxford: OUP.

Peacocke, C., (1986) Thoughts: An Essay on Content, Oxford: Basil Blackwell

Peacocke, C., (1989) Inaugural Lecture, given in the Examination Schools during Trinity Term.

Peacocke, C.  (1989) “Perceptual Content" in Themes from Kaplan, edited by J. Almog, J. Perry and H. Wettstein, OUP

Pellionisz, A. and Llinas, R., (1979) “Brain Modelling by Tensor Network Theory and Computer Simulation", Neuroscience, 4, pp. 323-348.

Pellionisz, A. and Llinas, R., (1980) “Tensorial Approach to the Geometry of Brain Function", Neuroscience, 5, pp. 1125-1136.

Pellionisz, A. and Llinas, R., (1982) “Space-Time Representation in the Brain", Neuroscience, 7, pp. 2949-2970.

Pellionisz, A. and Llinas, R., (1985) “Tensor Network Theory of the Metaorganisation of Functional Geometries in the Central Nervous System", Neuroscience, 16, pp. 245-273.

Place, U.T. (1970) “Is Consciousness a Brain Process?" in Borst, ed., The Mind Brain Identity Theory, Macmillan.

Putnam, H., (1975) “The Meaning of ‘Meaning'" in Putnam, Mind, Language and Reality, Cambridge: CUP.

Pylyshyn, Z. (1984) Computation and Cognition, Cambridge, MA: The MIT Press

Quine, W., (1960) Word and Object, Cambridge, MA: The MIT Press

Reifel, S. (1987) “The SRI Mobile Robot Testbed: A Preliminary Report", technical note 413, SRI International, Menlo Park, CA 94025

Rosenschein, S., An Introduction to Situated Automata, forthcoming in the CSLI Lecture Notes series, University of Chicago Press.

Rumelhart, McClelland, and the PDP Research Group (1986) Parallel Distributed Processing, volumes 1 & 2, Cambridge, MA: The MIT Press

Schiffer, S., (1987) Remnants of Meaning, The MIT Press: Bradford Books.

Smart, J.J.C. (1970) “Sensations and Brain Processes" in Borst, ed., The Mind Brain Identity Theory, Macmillan.

Smith, B., (1987) The Correspondence Continuum, Center for the Study of Language and Information, Report No. CSLI-87-71.

Smolensky, P., (1987) “The Constituent Structure of Connectionist Mental States: A Reply to Fodor and Pylyshyn", The Southern Journal of Philosophy, vol. XXVI, Supplement.

Smolensky, P. (1988) “On the Proper Treatment of Connectionism", Behavioral and Brain Sciences, 11, 1-74.

Smolensky, P., (1988) Technical Report on Tensor Representation, Department of Computer Science, University of Colorado at Boulder.

Stich, S., (1983) From Folk Psychology to Cognitive Science: The Case Against Belief, Cambridge, MA: The MIT Press

Strawson, P. (1959) Individuals, London: Methuen

Swinburne, R., (1986) The Evolution of the Soul, Oxford: Clarendon Press.

Winograd, T., (1973) “A Procedural Model of Language Understanding" in Computer Models of Thought and Language, ed., R. Schank and K. Colby.  San Francisco: WH Freeman.

 



[1]  For the purposes of this paper, I am simply assuming this as a premise.  There are many good reasons for taking cognitive explanation to be irreducible and indispensable.   See, for example, Fodor in Block (1980, volume 1), Fodor “Computation and Reduction" in Fodor (1981b), Fodor (1987, chapter 1), Pylyshyn (1984, chapters 1 & 2), Putnam (1973).  I argue that there must be a scientific level of cognitive explanation in Cussins (1987).

 

[2] One level of description and explanation reduces to another level only if all the explanation at the reduced level can be derived from explanation at the reducing level (thanks to David Charles).  If all the properties at one level were identical to properties at another, then it would follow that one of the levels would reduce to the other; but reduction does not require property identity.  It is important to see that the construction of cognition, which I favour (see below), does not entail the reduction of cognition, which I do not favour. 

 

[3] In the sense in which digestion is realised in the stomach.

 

[4] See footnote 1 on page 37.

[5] That is, barring a physiological breakdown, a particular token effect, characterised in physiological terms, can, in principle, be derived by wholly physiological means, without having to use, for example, psychological laws or quantum mechanical laws.  There may be exceptions to this, but they are not the general case.  This is not to deny, of course, that the particular effect, under other descriptions of it, may be more satisfactorily explained non-physiologically.  And similarly for effects characterised in psychological terms.

 

[6] I develop a thought experiment to help make this problem vivid in Cussins (January, 1987).

 

[7] Cognitive reductionists include Smart (1970), Armstrong (1968), Place (1970), Lewis (1972).

 

[8] Cognitive eliminativists include Quine (1960).  Churchland, P.M. (1979), Churchland P.S. (1986) and Stich (1983) are often cited as eliminativists, but may be better thought of as recommending the elimination of conceptual content only, rather than every notion of content.

 

[9] Dennett (1987) occupies this dispensability position: there are irreducible cognitive properties, but they are not an essential part of the complete scientific explanation of human behaviour.  Strictly speaking there is no scientific psychology for Dennett because there are no psychological natural kinds (see Cussins (1988)).  Stephen Schiffer (1987) is also a dispensability theorist.  For Schiffer there are irreducible cognitive properties, but only in a pleonastic sense.

 

[10] I capitalise the first letter of “intelligible" to indicate that it expresses a semi-technical notion.  There is more discussion of this notion in Cussins (1992).

 

[11] Perceive, not infer.  Inferential connections are only ever within a level.  Perception (whether sensory or not) is the cognitive means to cross between levels.

 

[12] An Intelligible connection between 2 levels does not involve laws between the levels, nor does it involve a third level of description in terms of which the connection between the two principal levels is understood.  This is why I emphasised the practical character of the architect's ability.  The connection between two levels is Intelligible if the marching in step of the two levels does not appear as a miraculous coincidence (Cussins, January 1987).  One aspect of this is that somebody who grasps the Intelligible connection should have a fallible, practical capacity to arrange a system at the lower level so as to satisfy upper level constraints, and should have some idea about the circumstances under which upper level performance will degrade.

 

I can see no good reason why it should be a problem that the distinction between Intelligible gaps and miraculous gaps is merely intuitive (depending, for example, on what we recognise as Intelligible).  The construction constraint is a constraint on explanation.  It may be the case that what it is to be explanatory can itself only be explained in terms of what the creatures, for whom it is an explanation, find explanatory.  It would be very nice to be able to say what an Intelligible gap amounts to in a way which goes beyond this, but for my purposes it is sufficient that we can tell, of any gap between levels, whether it is Intelligible or miraculous.  The construction constraint, like Tarski's Convention T, is a criterion for success, the satisfaction of which we are able to recognise.  And, like Tarski's convention, it rests on an undefined notion.  (This parallel was brought to my attention by David Charles.)

 

[13] I give a miraculous coincidence thought experiment in Cussins (January, 1987).

 

[14] Philosophers of mind will be interested to note that the construction constraint is, in one way, weaker than supervenience (even supervenience on the entire physical world, rather than just the cranium), and, in another way, stronger.  It is weaker in that it does not require that a lower level in a construction is sufficient for an upper level, as the subvenient level must be sufficient for the supervenient level.  So, in connection with section (2), we might note against LOT that syntax does not determine semantics, because there will always be distinct semantic interpretations of, for example, the connectives given the same syntax and proof theory (Williams, unpublished). Nevertheless, the fact that syntax preserves semantic constraints, under any of the possible semantic interpretations, may be sufficient to account for the marching in step of explanations of behaviour which employ semantic notions (psychological explanation), with explanations of behaviour which are due to the computational implementation of syntax.  Hence, although syntax does not determine semantics, this fact cannot rule out the use of the syntax/semantics theory of representation in a construction of cognition.

The construction constraint is stronger than supervenience in that it imposes demands on explanation that supervenience does not (supervenience does not require that there be an Intelligible connection between levels).

One advantage, to me, of the construction constraint over supervenience is that I know how to argue for the construction constraint, but I do not know how to argue for supervenience.

 

[15] For some writers, such as Swinburne (1986), the marching in step doesn't just appear to be, but is miraculous: God must be invoked in order to account for the behavioural coherence of a person.

 

[16] Throughout, I use the term “construction" in the technical sense employed by the construction constraint.  A relation between two adjacent levels may be a construction relation, even though the construction constraint requires more than two levels.  It will be a construction relation iff it is Intelligible.  The naturalistic constraint on a framework of theories is not to make the relation between every two levels a construction relation, but rather to satisfy the construction constraint.  The relation between non-adjacent levels in a framework which satisfies the construction constraint may not be an Intelligible relation, for a grasp of it may depend on grasp of the theory of the intermediate levels.

 

[17] There are the beginnings of construction theories based on the communication theoretic notion of information (Dretske 1981).  The trouble with these theories, as with behaviourism, is that although the notion of information is suited to a low level in a construction, nobody has yet shown how it is possible, even in principle, to construct a level of concept possession on top of the informational level.  Dretske has a go in chapter 7 of his book, but unfortunately the attempt fails (Cussins, forthcoming).  Other non-reductionistic, non-eliminativist theoretical frameworks include Millikan (1984), but it is not yet clear how this framework could yield a construction.

 

[18] For LOT aficionados: A number of issues within LOT have revolved around whether a special notion of semantics is required for psychological interpretation.  The special notion is generally called “narrow semantics", which, under psychological interpretation, becomes “narrow content" (Fodor (1987)).  Narrow content does not fully determine reference.  Narrow contents may have to be general, while all the singular aspects of content are taken to be broad phenomena, and narrow content may be restricted to the presentation of observational properties to the exclusion of natural kind properties.  But the point is that these innovations are heavily constrained: narrow content and narrow semantics must be narrow: they must form a subset of classical content and classical semantics.

 

[19] It is even consistent with this to suppose that there were simple kinds of content around in the world, before there were any experiencing subjects — but more of this later.

 

[20] In this context, this is just stipulation.  Some people working in non-Fregean semantical traditions, rather than a Dummettian / Strawsonian tradition, will find my use odd, which is why I have begun with this stipulation.  I need to have a notion like my notion of content — whatever it is called — because part of the problem of embodied cognition is to explain how there can be certain physical creatures, like us but unlike paramoecia, whose response to the world does not consist wholly in their response to physical stimulations of their sensory surfaces, but which rests, in part, on a conception of how the world is.

 

[21] How could there be such a way?  Well, this is what I am devoting much of the paper to trying to explain.  So far in this section I take myself to have given only a pocket account of the notions of content, conceptual and nonconceptual.  The rest of this section begins an analysis, and gives an argument for the existence of non-conceptual content, while in section (7) I make more precise the notion of a content's presenting the world objectively as consisting of objects, properties and situations.  The claims in this section don't tell us what content is; they are intended just to give an intuitive feel for the notions.  Later, we will see that conceptual content is the availability in experience of a task domain, and nonconceptual content is the availability in experience of substrate domain abilities.

 

[22] Following, with some differences, the usage of Bennett (1976) and of Brian Smith (1987).

 

[23] NB: these are not yet definitions of two kinds of content.

 

[24] Something is canonically characterised (within a theory) if, and only if, it is characterised in terms of the properties which the theory takes to be essential to it.  A game of football, for example, is canonically characterised, in the Football Association, in terms of the notions employed in the rules of the game, not in terms of temporal patterns of disruption to the playing field.  A content is canonically characterised by a specification which reveals the way in which it presents the world.  See below.

 

[25] I use asterisks, like quotation marks, to indicate that the enclosed words do not have their normal reference.  But asterisks indicate that the words refer to the concept or concepts, or other kind of content, that the words express, rather than to the linguistic items themselves.

 

[26] Notice the difference between instantiating or satisfying or falling under a concept, on the one hand, and possessing a concept on the other.  I possess the concept *bachelor*, but I don't fall under that concept.

 

[27] Not a content property, obviously.

 

[28] The task domain objects, properties and situations are presumed to be fully objective, in the sense that it is, in principle, possible to explain what it is for them to exist in a way which is independent of any explanation of what it is for organisms to recognise or perceive or act on them.  (It will then turn out that the notion of a task domain is an idealisation.)  It is important to see that a task domain is entirely abstracted from any perceiver or subject.  There is no point of view in a task domain, no essentially indexical elements.

 

[29] One might suppose that a task domain is simply a part of the world.  But this is not so, because a task domain is a part of the world under a given conceptualisation.  Not only does the world permit of many different true conceptualisations, it also permits of registrations which are NOT conceptualisations (I shall argue).

 

[30] See Winograd, T., (1973).

 

[31] For such a game to be playable, it would have to be supplemented with new rules, such as the rule of obligatory capture: if, on a turn, a player can capture an opponent's piece, then he must do so.  But this does not alter the point that an intelligent capacity to play a game (unlike a conventional computer's capacity) entails the capacity to adapt to be able to play related games, whose task domains may differ from each other and from that of the original game.

 

[32] We shouldn't assume that, because the state has a linguistic expression, it therefore has only one kind of content: linguistic content isn't a kind of content, only a kind of expression of, or vehicle for, content.  It turns out that we need more than one kind of content to do justice to our language use.  At this stage in the paper, I am trying to be neutral on this point.

 

[33] See, for example, Dummett (1975) & (1976), Davidson (1967), Evans (1982, chapters 1-4).

 

[34] Having probabilistic truth conditions is one way to have determinate truth conditions.  When Quine argued that the reference of “gavagai" was indeterminate (Quine 1960), he did not mean that it referred with a certain probability to rabbit, and with a certain probability to rabbit-stage, and with a certain probability to connected-rabbit-parts.  From my perspective, fuzzy set theory and probabilistic emendations of semantic theories do not offer us a notion of content different from conceptual content.  Rather, they provide a way in which a state or item, etc., may have its conceptual content probabilistically.  A coin tossed in a task domain may come up heads with probability 1/2.   Task domains are fully determinate, not deterministic.

 

[35] I am not prejudicing the issue of whether there is more than one kind of content.  I am noticing a certain constraint within the theory of content and calling “a content" the content which satisfies this constraint.  Later, I introduce a different constraint within the theory of content, and call “b content" the content which satisfies this new constraint. This leaves it open that a content may be identical to b content.

 

[36] Or, a determinate contribution to determinate truth conditions.  I shan't continue to make this qualification.

 

[37] It has been a philosophical convention since Frege (cf Frege, 1977) that *thinking* is a psychological concept, whereas *thought* is a logical or philosophical notion.  *Concept*, like *thought*, is, in the first instance, a logical notion: concepts are thought constituents.  So saying “my thought that p" within the convention entails that the content of the state is conceptual.  Saying, merely, “my thinking that p" does not entail any consequence about the kind of content that the state has.

 

Part of what I am addressing in the paper is the question whether *concept* should, as well as being a logical notion, also be a psychological notion.  Psychology, I am assuming, must employ some notion of content, but I will suggest that the kind of content which is conceptual content has only a logico-philosophical role; psychology requires a different kind of content — non-conceptual content.

 

[38] Frege (1892).

 

[39] See footnote 1 on page 24.

 

[40]  See, for example, Peacocke (1986).  Frege added the further condition on sense, that it determine reference.  This is not, however, a condition on b content.  Only certain b contents (those that are senses) determine reference.

 

[41] Of course, the people I cite don't put their conclusions this way!

 

[42] Frege's intuitive criterion of difference: The thought grasped in one cognitive act, x, is different from the thought grasped in another cognitive act, y, if, and only if, it is possible for some rational person at a time to take incompatible attitudes to them; ie. accepting (rejecting) one while rejecting (accepting) or being agnostic about the other.
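A rough schematic rendering of the criterion (my formalisation, not Frege's own notation; “A(S, p, t)" is an assumed shorthand for “S accepts p at t"):

\[
p \neq q \;\iff\; \Diamond\,\exists S\,\exists t\,\bigl[\mathrm{Rational}(S) \wedge \bigl((A(S,p,t) \wedge \neg A(S,q,t)) \vee (A(S,q,t) \wedge \neg A(S,p,t))\bigr)\bigr]
\]

Here p and q are the thoughts grasped in the cognitive acts x and y, and not-accepting (¬A) stands in for rejecting or being agnostic.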

 

[43] cf. footnote 1 on page 18 where I say that the notion of a task domain prescinds from any notion of indexicality.  There is no point of view in a task domain, so if point of view is essential to indexicality, a notion of content for which indexicality is essential cannot be captured by means of concepts of the task domain.

 

[44] Are there any representational states which don't contain, either explicitly or implicitly, indexical or demonstrative elements?  Perhaps *God is good*, because it is part of the essence of God that He is unique.  (Just about every definite description contains an implicit indexical reference to, for example, our earth).  But does “good" mean “good to us", “good from our point of view, rather than, say, the devil's"?

 

[45] The theorist can refer to the mode of presentation in question without employing it, but this doesn't help.  What is in question is the kind of explanation of the nature of these contents that a scientific psychologist can give or use, if the psychologist is restricted to conceptual kinds of specification, and accepts as the explanatory task the need to construct any psychologically indispensable notion of content.  The trouble is that if the specification is canonical, the theorist's capacity to understand the nature of the content in question depends, ineliminably, on his or her having had similar experiences.  Thus, conceptual specification of these contents which is both canonical and theoretically adequate fails because there are only two ways to conceptually specify such contents: by means of concepts of the task domain, or by use of the indexical or demonstrative term where the understanding of the use of the term depends on either sharing the experiential environment, or having had similar experiences.  Perry's and Peacocke's arguments show that the first method of specification cannot be canonical for b contents, while the ineffable dependence on having had certain sorts of experiences shows that the second method of specification cannot be theoretically adequate.  (Thanks to Christopher Peacocke for pointing out this worry to me.)

  

[46] It may be objected that I am imposing overly strict explanatory demands on a theory of content.  I consider this objection in Cussins (1992).

 

[47] For example Dretske's notion of information (Dretske 1981) would be a notion of non-conceptual content, were it to be a notion of content, because one does not need to possess the concepts of information in order to be in information carrying states.  (Evidently so, since even trees – for that matter, anything at all – carry information).  The trouble comes when Dretske tries to justify the notion of information as a notion of content.  Peacocke (1989) develops a different notion of nonconceptual content.

 

[48] See Reifel (1987).

 

[49] I don't want to beg the question as to what kind of representational system is sufficient for the possession of concepts, so, in the discussion of systems such as Flakey, I use a general notion of representation, which is neutral with respect to whether its significance (eg. its semantics) is only extrinsically attributed, or whether its significance (like that of content) is intrinsically available.  I consider the conditions for a physical system to have states whose significance is intrinsically available in section (7).

 

[50] As certain theorists in AI do; see, for example, Lenat and Feigenbaum (1987).

 

[51] See Barwise (1987).

 

[52] The American defence department funding agency for “advanced research projects".

 

[53] See Cussins (May, 1987).

 

[54] Evidently, what abilities are part of the s-domain will be relative to a particular task domain.  Knowledge of visual information processing algorithms is not part of many people's task domain, but it was part of David Marr's.  Hence what kind of content some state possesses will be relative to the kind of evaluation which is appropriate to it: the particular way of dividing up t and s domains for the particular case.  There may be more than one task domain for a single state at a time.

 

[55] For that matter, the subject may also be incapable of moving.  A way of moving may be available in my experience, even though I am incapable of acting on the basis of it.  (The content would still be canonically characterised in terms of its constitutive connections to perception and action).

 

[56] In the sense of “objective" which I make clear in § 7 and § 8.  Basically, pain experience is less objective because it is less perspective-independent.

 

[57] For some exposition of this term, see the discussion of the map-maker in §9.1.

 

[58] See Dennett (1969) pp. 93-94, and Dennett (1978) pp. 101-2, 153-4, 219.

 

[59] Not just for a behaviourist, actually.  The cognitive revolution may have reinstated the notion of representation, but it hasn't yet reinstated experience (my notion of content).  I hope by this article to push us a little way towards doing that.

 

[60] It should be noted that there may be kinds of b content which are not canonically specified by means of concepts of the s-domain, or, more narrowly, by means of concepts of ways of finding one's way in the environment; if so, these will be kinds of content which are not kinds of conceptual content.  Their status will depend on how it is proposed to canonically capture them.

 

[61] In Cussins (forthcoming) I extend the argument from the case of indexical and demonstrative senses to all senses.

 

[62] Remember that *conceptual* does not equal *conscious*.  Of course recognition of Charis does not depend on matching to consciously stored features; this is not the point I am making.  In claiming that the psychological structure of recognition of individuals is its non-conceptual structure, rather than its conceptual structure, I am claiming that a computational model of individual recognition must be suited to transforming representations which have non-conceptual, rather than conceptual, content.  Much of this computational transformation of representations will normally, of course, be quite unconscious.

 

[63] See Evans (1982), chapter 8.

 

[64] A typical S/S system will not, strictly speaking, have conceptual contents, since it won't, strictly speaking, be a concept-exercising system.  Such a use of the notion of conceptual content is a derivative use, which must ultimately be explained in terms of the paradigmatic use of the notion with respect to a subject of experience and thought (see pages 14 and 15).  Nevertheless, derivative uses of the notions of content may have considerable utility.

 

[65] The theory of the vehicles of CTC might take many different forms: a cybernetic account, an information processing account, an ecological account (Gibson 1986), or a tensor theoretic account (Pellionisz and Llinas 1979, 1980, 1982, 1985; Smolensky 1988, University of Colorado Computer Science Technical Report).  But the theory of the representational vehicles of content should not be confused with a theory of content.  We should not speak of cybernetic, or information-theoretic contents, only of cybernetic or information-theoretic vehicles of content.  In section (8) I draw a distinction between representational vehicles of nonconceptual contents which are abilities, and connectionist computational vehicles of nonconceptual contents which are patterns of activation distributed over processing units.

 

[66] The “analytic hierarchy of concepts" just means the ordering of concepts due to the relations of one-way logical dependency between concepts.  Thus the concept *spectacles* logically or analytically depends on the concept *eye*, but not vice-versa.  The concept *bachelor* depends on the concept *adult*, but not vice-versa.

 

[67] See, for example, Brachman and Levesque (1985).

 

[68] See section (4.323), and the next footnote.

 

[69] The reader will have noticed that I am using the word “objective" more and more.  A very rough idea is that something is objective if it is independent of a subject's grasp of it.  Paining properties are not objective in this sense, neither are university degrees (?), but being triangular is.  I shall be less rough about the notion in section (7).

 

[70] x implements y if x is a substrate for y, but the explanation of what it is to be y is independent of (does not in any way rely on) the explanation of what it is to be x.  A basic dogma of cognitive science (rejected by eg. Churchland (1986)) has been that neuro-physiology implements cognition: cognitive facts may be true in virtue of neuro-physiological facts, but the explanation of the nature of cognitive facts is quite independent of the explanation of neuro-physiological facts.  People speak of a “mere" implementation theory, not because its provision would not be both major and desirable, but because it would not shed any light on the psychological nature of cognition.

 

[71] For philosophers: Notice that the conceptualist's explanatory independence between levels is compatible with supervenience.  A much more difficult question is whether it is compatible with the construction constraint.  Ultimately I think not, but showing this depends on showing that all conceptualist attempts to construct concepts, such as LOT and functionalism, fail.

Fodor rejects total explanatory independence of the conceptual and nonconceptual levels because, for example, he thinks that conceptual level phenomena of opacity have to be explained by reference to the syntactic form of the cognitive representations (“Propositional Attitudes" in Fodor (1981b)).

 

[72] The claim is that inference is just a special case of the general phenomenon of systematicity of contents.  Inference is the systematicity of thought contents.

 

[73] (1) and (4) are recognised in Fodor (1976, 1980, 1981), (2) is recognised in Fodor (1983), (3) - like the other elements - is implicit in the practice of much of AI, and (5) and (6) are exploited in Fodor & Pylyshyn (1988).

 

[74] It doesn't follow that we require a theory based on the psychological and computational use of CTC, since there may be other kinds of non-conceptual content.  But my aim throughout the paper is to indicate how a certain kind of theory is possible, rather than necessary (even though I believe that too!).

 

[75] I explain why this is so in section (7).

 

[76] And thus distinguishes the content from an elongated colour-coded candy content, because, in a task domain, elongated colour-coded candies are different objects from traffic lights (although they might occupy the same location).  This is because a task domain is a region of the world under a given conceptualisation.

 

[77] By colour experience which is divorced from a certain theoretical background, I mean colour experience considered apart from a subject's knowledge that one might, for example, individuate colours by their matching conditions to all objects (Goodman 1951).  Such basic colour experience is governed by the two principles which I have adapted from Peacocke (1986).  (Because these principles make reference to the causal conditions of colour perception, basic colour experience is not primitive colour experience, which would be experience considered apart from a subject's knowledge that it is brought about causally in a certain way.)  Considerable logical sophistication might give rise to a coherent referent for colour contents, but these colour contents would no longer be the colour contents of basic colour experience, but rather the contents of super-sophisticated set-theoretical concepts.  Basic colour contents are essentially observational: one can tell, normally, just by looking, whether they apply to the world.

 

[78] This argument is given in Peacocke (1986), which makes use of the argument in Dummett (1978, “Wang's Paradox").

 

[79]  There are numerous attempts to define artificially a non-paradoxical colour referent.  Some of these may be successful, but all of them involve some departure from our intuitive practice.  The artificial referent will not be determined by the cognitive significance of our basic colour contents, (as sense should determine reference).  As mentioned in footnote ?? on page ??, I am concerned here only with basic, observational, colour contents of experience.

 

[80] “We can imagine a series of judgements “Warm now", “Buzzing now", made by a subject in response to changes in his sensory state, which have no objective significance at all.  But we can imagine a similar series of judgements, prompted by the same changes in the subject's sensory state, which do have such a significance: “Now it's warm",  “Now there's a buzzing sound" — comments upon a changing world.  What is involved in this change of significance?"  (Evans 1980).  The exchange between Strawson (1959, chapter 2), Evans (1980) and Strawson's response to Evans (1980) is a classic discussion of adopting the mental perspective on the emergence of objectivity.

 

[81] Where a little stage-setting goes on by laying out some claims and counter-claims in a largely metaphorical fashion.

 

[82] An objection raised by Ned Block during a discussion at Birkbeck.

 

[83] I have in mind, here, objections raised by Andy Clark and Martin Davies.  Not that they are committed to conceptualism!

 

[84] Then, too, there are important cognitive phenomena which are dependent on linguistic—and other communicative—vehicles.  Recognition of this is entirely within the spirit of C3 (§9).

 

[85] And we do hear very little about it from psychology.  Certain areas of developmental psychology, especially Piagetian psychology, are among the few exceptions.

 

[86] See Fodor (1986).

 

[87] This approach, like Evans's, is influenced by chapter 2 of Strawson (1959).

 

[88] A way of talking which has utility but not truth.  It may be useful to speak of the weather being fierce, even though the weather is not the kind of thing which can be fierce, strictly (ie. truthfully) speaking.  It is often held that the ascription of beliefs to thermostats is similarly instrumentalistic.  Dennett (1978) holds that the ascription of beliefs even to us is instrumentalistic.

 

[89] For philosophers: The notions of possibility and necessity here are notions that are grounded in descriptive metaphysics (Strawson (1959)).

 

[90] For discussion of this notion, see Campbell (1986).

 

[91] See Evans (1982), chapter 5.   An information link is a link between an organism and an object by means of which the organism receives information about the object.  One's judgements and movements may be responsive to changes in properties of an object on the basis of an information link.  The content of the information which is picked up by the organism is not conceptual.

 

[92] What is captured in the theory of content's canonical specification of the cognitive significance of the content.

 

[93] See Putnam (1975).

 

[94] It is interesting how inadequate pain memory is.  Although one can remember, non-conceptually, being in pain, the memory is often highly deceptive with respect to the intensity of the pain.  It is said that there would only be singleton children if this were not the case.

 

[95] Note that this is not to say that there are any ways of cognising the world which are not from a particular perspective.  Every way of cognising is from a perspective, but only some ways of cognising present the world as (approximately) being the kind of thing which could be available to any conceptual perspective.

 

[96] or Situated within, in the sense of Cussins (May, 1987).

 

[97] When I first moved to Palo Alto, I used to recognise the street off El Camino on which I lived, from amongst the indiscriminable buildings and roads on my side of the highway, by means of the purple colour of a Taco stand on the corner of the street.  One day I drove several miles past my street.  The local housing association had objected to the colour, and required the owner of the Taco stand to repaint with a shade of grey.

 

[98] For examples of people who think just the reverse, ie. that the route to intelligent cognition is by exploiting the features of particular task domains, see Israel (1987), or Barwise (1987) or Rosenschein (forthcoming).

This paragraph gives a reason why learning is the core of cognition, and why “problem solving" (in the GPS sense) is at the periphery.  It is important to realise that perspective-independence, like the objectivity constraints, is an ideal to which no physical system can perfectly approximate; my Palo Alto abilities will always be route-contaminated.  The point is that this ideal identifies the dimension along which different systems may be assessed for their conceptuality, and therefore for their intelligence.  To have one's theory of intelligence (like Situated theory) make task domain dependence a virtue (so that the way in which to design successful systems is to Situate them in a task domain) is to abandon psychological theorising, however good the engineering may be.  A theory counts as a scientific psychological theory if, and only if, it shows what it would be to alter a physical system so that it is more nearly conceptual.  My Palo Alto capacity, although route-contaminated, is a conceptual capacity because it is essentially a part of a flexible, learning system that, with experience, is moving my ability along the dimension of greater and greater perspective-independence.  I was slow, but after the first couple of months I did manage to recognise my street without depending on a feature or landmark, like the colour of the Taco stand.

 

[99] More strictly: “in virtue of being canonically specified by means of perspective-dependent abilities".  This expansion should be read into the shorter form of expression, whenever it is used.

 

[100] The idea here is that, correlative with the spectrum of increasingly sophisticated nonconceptual contents, there is a spectrum of increasingly sophisticated norms which can apply to the non-conceptual contents.  True/False can apply only to conceptual contents (and thus to states whose non-conceptual content is sufficiently perspective-independent to approximately satisfy the objectivity constraints).  But there are lesser norms which can apply to lesser nonconceptual contents.  In Britain it is wrong to drive on the right, and right to drive on the left.  But consider a time near the beginning of the century.  There was no right or wrong with respect to which side of the road one drove on.  As there came to be more and more traffic, a convention of driving on the left started to establish itself.  During this intermediate phase, one would not be wrong to drive on the right (as one now is), but one would be a menace.  Being a menace is being judged by a lesser norm than being wrong, but a norm nevertheless.

 

[101]  See Evans (1982), chapter 6.  The phrase “a view from anywhere" was used by Brian Smith to capture a difference from Nagel's “The View from Nowhere".

 

[102] See Rumelhart, McClelland and the PDP Research Group (1986).

 

[103] Haugeland's term for Good Old Fashioned Artificial Intelligence.

 

[104] It would still be “C3": The Computational Construction of Concepts.

 

[105] It is sometimes thought that this difficulty is confined to the significance of individual hidden units, rather than to activity vectors over many units.  The reason for this is that it is assumed that although the hidden units may well have no conceptual significance, the activity vectors will.  I hope that C3 will show that this is not necessary either.  The psychological analysis of cognition is nonconceptual through and through.  The conceptual level of description merely provides psycho-computationally inert constraints on the psycho-computationally causally active processes.

 

[106] So if this account provides a sense in which concepts have scientific reality, it is a sense which is entirely compatible with considerable indeterminacy of conceptual content.