Searle: Minds, Brains and Programs

From: Hosier Adam (ash198@ecs.soton.ac.uk)
Date: Tue Mar 06 2001 - 10:15:59 GMT


Adam Hosier < >

Searle, John R.: Minds, Brains, and Programs (1980)
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Hosier:
The best way to summarise Searle's paper 'Minds, Brains, and Programs'
(1980) is to quote his own abstract:

SEARLE:
> This article can be viewed as an attempt to explore the consequences
> of two propositions. (1) Intentionality in human beings (and animals)
> is a product of causal features of the brain. I assume this is an
> empirical fact about the actual causal relations between mental
> processes and brains. It says simply that certain brain processes are
> sufficient for intentionality. (2) Instantiating a computer program
> is never by itself a sufficient condition of intentionality. The main
> argument of this paper is directed at establishing this claim. The
> form of the argument is to show how a human agent could instantiate
> the program and still not have the relevant intentionality.

Hosier:
The argument that Searle puts forward has come to be known as the
classic Chinese Room Argument, and it is this argument that Searle uses
to answer the following question:

SEARLE:
> "Could a machine think?" On the argument advanced here only a machine
> could think, and only very special kinds of machines, namely brains
> and machines with internal causal powers equivalent to those of brains.
> And that is why strong AI has little to tell us about thinking, since
> it is not about machines but about programs, and no program by itself
> is sufficient for thinking.

Hosier:
Notice that the main point Searle is trying to make is not that AI is
impossible, merely that a conventional, computation-based computer
program cannot by itself 'be intelligent'. In particular, Searle is
attacking the concept of strong AI, which he goes on to define as
follows:

SEARLE:
> But according to strong AI, the computer is not merely a tool in the
> study of the mind; rather, the appropriately programmed computer really
> is a mind, in the sense that computers given the right programs can
> be literally said to understand and have other cognitive states.

Hosier:
Searle goes on to give an example of a program by Roger Schank
(Schank & Abelson 1977). He describes this program as follows:

SEARLE:
>The aim of the program is to simulate the human ability to understand
> stories. It is characteristic of human beings' story-understanding
> capacity that they can answer questions about the story even though
> the information that they give was never explicitly stated in the
> story. Thus, for example, suppose you are given the following story:
> "A man went into a restaurant and ordered a hamburger. When the
> hamburger arrived it was burned to a crisp, and the man stormed out
> of the restaurant angrily, without paying for the hamburger or leaving
> a tip." Now, if you are asked "Did the man eat the hamburger?" you
> will presumably answer, 'No, he did not.'

Hosier:
In simple terms, the program can answer queries about a story from a
knowledge base at roughly the level a human reader would, including
questions whose answers were never explicitly stated in the story.
Searle goes on to say that,

SEARLE:
> Partisans of strong AI claim that in this question and answer sequence
> the machine is not only simulating a human ability but also
>
> 1. that the machine can literally be said to understand the story and
> provide the answers to questions, and
>
> 2. that what the machine and its program do explains the human ability
> to understand the story and answer questions about it.

Hosier:
I am not sure which 'partisans' Searle is referring to, but it seems an
obvious assumption that they are wrong. These days no one would
interpret an 'expert system' as having cognitive states - and the
program by Roger Schank is essentially just an expert system. However,
in order to disprove the claim of the 'partisans of strong AI', Searle
creates a Gedankenexperiment: the Chinese Room Argument (CRA). I
believe this argument is so effective that, beyond serving its initial
purpose of de-mystifying expert systems, it can also be levelled at
all computation-based efforts at creating AI. This is in fact what
Searle goes on to express.
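
To make concrete what 'essentially just an expert system' means here,
the following is a minimal sketch of my own - not Schank's actual
program, and with invented rules and function names - of how a
script-style question answerer can produce an answer that was never
explicitly stated in the story. The answer comes out right only
because a programmer put the inference rule there in advance.

# Toy sketch of a script-style story answerer (a hypothetical
# illustration, not Schank's SAM): hand-written 'restaurant script'
# rules do all of the inferring.

RESTAURANT_SCRIPT = {
    # if the diner storms out angrily without paying, infer that he did not eat
    ("storms_out_angrily", "did_not_pay"): {"ate_food": False},
}

def parse_story(story):
    """Crude keyword matcher that maps surface text onto script events."""
    events = set()
    if "stormed out" in story and "angrily" in story:
        events.add("storms_out_angrily")
    if "without paying" in story:
        events.add("did_not_pay")
    return events

def answer(story, question):
    events = parse_story(story)
    inferred = {}
    for conditions, consequences in RESTAURANT_SCRIPT.items():
        if set(conditions) <= events:      # all conditions of the rule are present
            inferred.update(consequences)
    if "eat" in question:
        return "No, he did not." if inferred.get("ate_food") is False else "Presumably, yes."
    return "I don't know."

story = ("A man went into a restaurant and ordered a hamburger. When the "
         "hamburger arrived it was burned to a crisp, and the man stormed "
         "out of the restaurant angrily, without paying for the hamburger.")
print(answer(story, "Did the man eat the hamburger?"))   # -> No, he did not.

Nothing in this lookup could plausibly be called understanding of the
story, which is exactly the intuition the CRA is designed to pump.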

Searle's CRA can be briefly described as follows:
Suppose that an exclusively English-speaking person is locked in a room
and given a set of rules for responding to Chinese script. Now suppose
that, by following these rules, the person can take Chinese writing as
input and produce Turing-indistinguishable responses as output; i.e. to
an outside observer the person appears to read and write Chinese.
(N.B. this is not to say that the person understands what he is doing.)
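
In computational terms, the room amounts to nothing more than a lookup
over uninterpreted symbol strings. A minimal sketch of my own, with an
invented two-entry 'rule book', makes the point: every step is symbol
matching, and none of it requires the operator - or a program doing the
same job - to know what any of the symbols mean.

# A hypothetical 'rule book' mapping input symbol strings to output
# symbol strings; to whoever (or whatever) applies it, the entries are
# just uninterpreted shapes.
RULE_BOOK = {
    "你好吗": "我很好",            # the operator need not know this pairs "How are you?" with "I am fine"
    "你叫什么名字": "我没有名字",  # ... or that this pairs "What is your name?" with "I have no name"
}

def chinese_room(input_symbols):
    # Follow the rules; attach no meaning to any symbol.
    return RULE_BOOK.get(input_symbols, "请再说一遍")   # default: "please say that again"

print(chinese_room("你好吗"))

Whether any finite rule book of this kind could really give adequate
responses to every possible input is, of course, the first worry raised
in the next paragraph.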

This argument seems flawed only in the respect that, although a program
could be made to give appropriate symbolic responses to some symbol
inputs, surely no possible program could give adequate responses to all
symbol inputs. For instance, humans have trouble answering 'What is the
meaning of life?', as would any AI solution. More fundamentally,
however, it seems obvious that a symbol-responding system could not be
just a set of rules. The system would have to include some kind of
experience 'history', so that questions based on previous questions
could be answered. It would also seem to need some kind of actual
symbol grounding within the physical world, so that actual 'meaning'
could be attached to the input and output symbols. It is this 'meaning'
that Searle suggests is lacking from any computational AI system. With
respect to the earlier claims, Searle argues:

SEARLE:
> 1. As regards the first claim, it seems to me quite obvious in the
> example that I do not understand a word of the Chinese stories. I have
> inputs and outputs that are indistinguishable from those of the
> native Chinese speaker, and I can have any formal program you like,
> but I still understand nothing.

Hosier:
Thus he argues that Schank's program does not 'understand'. I agree.

SEARLE:
> 2. As regards the second claim, that the program explains human
> understanding, we can see that the computer and its program do not
> provide sufficient conditions of understanding since the computer and
> the program are functioning, and there is no understanding. But does
> it even provide a necessary condition or a significant contribution
> to understanding?

Hosier:
Again I agree. As an engineer I am not particularly concerned with
whether it provides a 'contribution to understanding'; I see AI
engineering as a method of solving problems rather than a problem to be
solved. However, I believe that there is in fact a great deal to be
learned from AI, and that humans probably are slowly reverse-engineering
the brain. (This would seem to be a logical consequence of the fact that
the main source of empirical data about intelligence comes from
experiments on human or animal brains.)

SEARLE:
> Notice that the force of the argument is not simply that different
> machines can have the same input and output while operating on
> different formal principles -- that is not the point at all. Rather,
> whatever purely formal principles you put into the computer, they
> will not be sufficient for understanding, since a human will be able
> to follow the formal principles without understanding anything.

Hosier:
The previous statement by Searle seems to summarise his entire argument
well. He then goes on to talk about how humans extend their own
'intentionality' onto the tools we create, for instance, "The door
knows when to open because of its photoelectric cell". Maybe humans do
this, and maybe, even though we do this, we don't actually think the
door has any kind of understanding. However, this is diverging from
Searle's main point: that there is, and can be, no understanding in a
pure symbol system.

Searle then goes on to discuss a number of counter-arguments to his
theory ("Now to the replies:"). I will not repeat all of them here,
as I believe in most cases Searle's replies to these arguments are
correct. One particularly interesting reply given by Searle concerns
the 'Robot reply (Yale)'. Essentially, the counter to the CRA is that
if the computational system involved some kind of grounding in the
real world, through sensory-motor interaction with that world, it
would be immune to the CRA; i.e. Searle can become a rule system, but
he cannot become a 'robot'.

SEARLE:
> The first thing to notice about the robot reply is that it tacitly
> concedes that cognition is not solely a matter of formal symbol
> manipulation, since this reply adds a set of causal relation with
> the outside world.

Hosier:
The above quote seems to give a clue about what Searle thinks will be
needed by a true AI system. However, Searle's reply to the robot reply
seems correct. In summary, Searle suggests that although it is not
physically possible for him to become the whole robot system, it is
still possible to become its computational core. The addition of
sensory-motor input and output within the real world can then simply
be seen as more meaningless input and output symbols arriving at and
leaving that computational core.

SEARLE:
> But the answer to the robot reply is that the addition of such
> "perceptual" and "motor" capacities adds nothing by way of
> understanding, in particular, or intentionality, in general, to
> Schank's original program.

> I am receiving "information" from the robot's "perceptual" apparatus,
> and I am giving out "instructions" to its motor apparatus.

> [the robot], it is simply moving about as a result of its electrical
> wiring and its program. And furthermore, by instantiating the program
> I have no intentional states of the relevant type. All I do is follow
> formal instructions about manipulating formal symbols.

Hosier:
This seems to me to be a correct answer to the robot reply. It also
leads me to a point I have been considering. I know that in humans some
of the senses seem to extend so far as to actually become part of the
brain, such as the connection between the eyes and the brain. However,
at the end of the day the human brain 'core' simply deals with all the
meaningless symbols that come from the human senses, such as the eyes
or skin. Thus it would seem to me that the human brain only has symbols
as input (as well as the internal ability to record and play back these
inputs in order, and thus have some temporal awareness).
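
Searle's answer can be restated in the same toy terms as before:
bolting 'sensors' and 'motors' onto the rule follower merely widens
the alphabet of symbols it shuffles. The sketch below is my own
hypothetical illustration, not anything Searle or the Yale group
describe; its point is that the core cannot tell a camera reading from
a Chinese question, since both arrive as uninterpreted tokens.

# Hypothetical 'robot' version of the room: perceptual input arrives as
# symbol strings and motor output leaves as symbol strings, but the
# core still does nothing except table lookup.
ROBOT_RULES = {
    ("PERCEPT", "obstacle_ahead"): ("MOTOR", "turn_left"),
    ("PERCEPT", "hamburger_seen"): ("MOTOR", "reach_and_grasp"),
    ("TEXT", "你好吗"): ("TEXT", "我很好"),
}

def computational_core(message):
    # The same lookup handles 'perception', 'action' and 'conversation';
    # nothing on the inside distinguishes them.
    return ROBOT_RULES.get(message, ("MOTOR", "do_nothing"))

print(computational_core(("PERCEPT", "obstacle_ahead")))   # ('MOTOR', 'turn_left')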

Searle also makes the following controversial statement regarding the
Turing test,

SEARLE:
> The only motivation for saying there must be a subsystem in me that
> understands Chinese is that I have a program and I can pass the
> Turing test; I can fool native Chinese speakers. But precisely one
> of the points at issue is the adequacy of the Turing test.

Hosier:
The adequacy of the Turing test is not in question. In my opinion the
Turing test does not prove a system's intelligence, or prove some
particular facet of how the system works, such as whether it
understands. It is designed simply to show that, from the point of view
of the external examining entity, if one system is indistinguishable
from another then it might as well be that other system.
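
That reading of the test can be put operationally: the examiner only
ever sees transcripts, so the only property being measured is
indistinguishability of input-output behaviour from the examiner's
point of view. Below is a rough sketch of my own of that protocol; the
human, machine and judge functions are hypothetical placeholders, not
anything defined by Turing.

import random

def run_turing_test(human, machine, judge, questions):
    # Rough sketch of the imitation game: the judge sees only the
    # replies, never the mechanisms that produced them.
    correct = 0
    for q in questions:
        pair = [("human", human(q)), ("machine", machine(q))]
        random.shuffle(pair)                  # hide which reply came from which system
        replies = [text for _, text in pair]  # the judge gets bare text only
        guess = judge(q, replies)             # judge returns 0 or 1: which reply is the human's?
        correct += (pair[guess][0] == "human")
    # A judge stuck near chance (about 0.5) means the machine is
    # indistinguishable from the human *to this examiner* - nothing more.
    return correct / len(questions)

Nothing in the judge's verdict reveals how either system works
internally, which is why the test neither proves nor disproves
'understanding'.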

In fact Searle seems to elaborate on this point without realising that
the Turing test is not an indication of a particular type or method of
intelligence.

SEARLE:
> If strong AI is to be a branch of psychology, then it must be able to
> distinguish those systems that are genuinely mental from those that
> are not. It must be able to distinguish the principles on which the
> mind works from those on which nonmental systems work; otherwise it
> will offer us no explanations of what is specifically mental about
> the mental. And the mental-nonmental distinction cannot be just in
> the eye of the beholder but it must be intrinsic to the systems.

Hosier:
Searle then starts on the 'beliefs' of systems.

SEARLE:
> The study of the mind starts with such facts as that humans have
> beliefs, while thermostats, telephones, and adding machines don't. If
> you get a theory that denies this point you have produced a
> counterexample to the theory and the theory is false.

Hosier:
However, Searle does not expand on the word 'belief'. I have beliefs. I
do not really know why I have most of them - I simply have them without
much understanding - perhaps I am simply following a rule or a pattern,
in the same way as a computer might. Searle obviously does not see this
as an option. Searle repeatedly suggests the example of a hunk of metal
on the wall 'not having beliefs' - well, this would seem obvious. But
what about a computer program that predicts various probabilities for
tomorrow's weather? Perhaps these probabilities are the simplified
basis for the 'strong or weak beliefs' that humans have. I do not think
it is right to entirely dismiss this part of strong AI.
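
The weather example can be made concrete. A forecasting program's
output is a probability distribution, and one could, speculatively,
read those probabilities as a crude analogue of the graded 'strong or
weak beliefs' mentioned above. A minimal sketch of my own, with
made-up numbers:

# Hypothetical forecaster output: a probability for each outcome.
forecast = {"rain": 0.7, "dry": 0.3}

def degree_of_belief(outcome):
    # Speculatively read the probability as a graded belief.
    p = forecast.get(outcome, 0.0)
    if p >= 0.6:
        return "strong belief in '%s' (p = %.2f)" % (outcome, p)
    if p >= 0.4:
        return "weak belief in '%s' (p = %.2f)" % (outcome, p)
    return "disbelief in '%s' (p = %.2f)" % (outcome, p)

print(degree_of_belief("rain"))   # -> strong belief in 'rain' (p = 0.70)

Whether such a number deserves to be called a belief at all is
precisely what Searle would dispute; the point is only that the
dismissal should not be automatic.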

Searle also comes up with a strange idea about AI when he is answering
the 'many mansions reply (Berkeley)'.

SEARLE:
> I really have no objection to this reply save to say that it in effect
> trivialises the project of strong AI by redefining it as whatever
> artificially produces and explains cognition.

Hosier:
I am assuming that Searle is strictly taking the definition of strong
AI here to mean computation-based systems only. Also, I do not think
that explaining cognition and producing artificial intelligence is a
trivial project. Again, this all comes back to Searle's original and
only real argument: that a rule-based computational system built on
formally defined elements cannot be intelligent.

Searle concludes his paper with a number of simple questions and
philosophical answers. These lead to his final question, to which he
believes the answer is no.

SEARLE:
> "But could something think, understand, and so on solely in virtue of
> being a computer with the right sort of program? Could instantiating
> a program, the right program of course, by itself be a sufficient
> condition of understanding?"

Hosier:
To elaborate on this would duplicate what has already been written, as
the CRA seems to confirm that the answer to the above question is no.
The missing element, Searle seems to suggest, is some form of causal
system, unique to the brain, that gives rise to the intelligence
inherent in it. He then tries to explain why so many people have become
confused over the issue of AI and whether understanding can really be
shown in a computational system. He answers this by assuming that all
AI systems are merely simulations of the real thing, and as such he
believes the simulation cannot actually be the real thing. In the
examples he gives this is obviously the case:

SEARLE:
> The idea that computer simulations could be the real thing ought to
> have seemed suspicious in the first place because the computer isn't
> confined to simulating mental operations, by any means. No one
> supposes that computer simulations of a five-alarm fire will burn
> the neighbourhood down or that a computer simulation of a rainstorm
> will leave us all drenched.

Hosier:
However, consider a film where the actors are drenched in 'simulated
rain', or a person surfing a simulated wave. These seem more real, and
in some ways are real - but again they are simulated. I really believe
that before Searle argues against the possibility of an intelligent
system being made purely from symbols and rules he needs to know more
about how humans work. When I wince at pain, I supposedly had an
'experience' and applied meaning to the elements of the situation; as
well as this, I had to be self-aware in order to have this experience.
But what if I winced simply because my instinctive reaction was to
wince (cause and effect, possibly from a rule), and my 'experience' was
simply to record the physical environment and my own internal state at
the time of the pain? The symbols were grounded through my senses - in
other words, I now know that the 'red flame' that burnt me will burn me
again if I put my hand near it. As for being self-aware, clearly I know
it was me who just felt the pain, but I do not understand anything
about how I work - or we would not be having this debate - so how
self-aware am I?

I have digressed here, and actually I agree with Searle that a
simulation of intelligence cannot actually be intelligence. But this is
certainly not to say that all work in AI involves simulating
intelligence, and it is also not correct to think that AI will not help
our understanding of how human minds work.

SEARLE:
> Whatever it is that the brain does to produce intentionality, it
> cannot consist in instantiating a program since no program, by
> itself, is sufficient for intentionality.

> "Could a machine think?" My own view is that only a machine could
> think, and indeed only very special kinds of machines, namely brains
> and machines that had the same causal powers as brains. And that is
> the main reason strong AI has had little to tell us about thinking,
> since it has nothing to tell us about machines.

Hosier:
Thus, although in principle I agree with Searle's main point that a
simple rule-based system cannot be intelligent, I do not agree that
such a system has nothing to do with intelligence - it could well be a
piece of the puzzle. I certainly do not agree that strong AI has had,
or will have, little to tell us about thinking.

Adam Hosier < >




Anderson, J. (1980) Cognitive units. Paper presented at the Society for Philosophy and Psychology, Ann Arbor, Mich. [RCS]

Block, N. J. (1978) Troubles with functionalism. In: Minnesota studies in the philosophy of science, vol. 9, ed. Savage, C. W., Minneapolis: University of Minnesota Press. [NB, WGL]

Block, N. J. (forthcoming) Psychologism and behaviorism. Philosophical Review. [NB, WGL]

Bower, G. H.; Black, J. B.; & Turner, T. J. (1979) Scripts in text comprehension and memory. Cognitive Psychology 11:177–220. [RCS]

Carroll, C. W. (1975) The great chess automaton. New York: Dover. [RP]

Cummins, R. (1977) Programs in the explanation of behavior. Philosophy of Science 44:269–87. [JCM]

Dennett, D. C. (1969) Content and consciousness. London: Routledge & Kegan Paul. [DD, TN]

Dennett, D. C. (1971) Intentional systems. Journal of Philosophy 68:87–106. [TN]

Dennett, D. C. (1972) Reply to Arbib and Gunderson. Paper presented at the Eastern Division meeting of the American Philosophical Association. Boston, Mass. [TN]

Dennett, D. C. (1975) Why the law of effect won't go away. Journal for the Theory of Social Behavior 5:169–87. [NB]

Dennett, D. C. (1978) Brainstorms. Montgomery, Vt.: Bradford Books. [DD, AS]

Eccles, J. C. (1978) A critical appraisal of brain-mind theories. In: Cerebral correlates of conscious experiences, ed. Buser, P. A. and Rougeul-Buser, A., pp. 347–55. Amsterdam: North Holland. [JCE]

Eccles, J. C. (1979) The human mystery. Heidelberg: Springer Verlag. [JCE]

Fodor, J. A. (1968) The appeal to tacit knowledge in psychological explanation. Journal of Philosophy 65:627–40. [NB]

Fodor, J. A. (1980) Methodological solipsism considered as a research strategy in cognitive psychology. The Behavioral and Brain Sciences 3:1. [NB, WGL, WES]

Freud, S. (1895) Project for a scientific psychology. In: The standard edition of the complete psychological works of Sigmund Freud, vol. 1, ed. Strachey, J.. London: Hogarth Press, 1966. [JCM]

Frey, P. W. (1977) An introduction to computer chess. In: Chess skill in man and machine, ed. Frey, P. W.. New York, Heidelberg, Berlin: Springer-Verlag. [RP]

Fryer, D. M. & Marshall, J. C. (1979) The motives of Jacques de Vaucanson. Technology and Culture 20:257–69. [JCM]

Gibson, J. J. (1966) The senses considered as perceptual systems. Boston: Houghton Mifflin. [TN]

Gibson, J. J. (1967) New reasons for realism. Synthese 17:162–72. [TN]

Gibson, J. J. (1972) A theory of direct visual perception. In: The psychology of knowing ed. Royce, S. R. & Rozeboom, W. W.. New York: Gordon & Breach. [TN]

Graesser, A. C.; Gordon, S. E.; & Sawyer, J. D. (1979) Recognition memory for typical and atypical actions in scripted activities: tests for a script pointer and tag hypotheses. Journal of Verbal Learning and Verbal Behavior 1:319–32. [RCS]

Gruendel, J. (1980). Scripts and stories: a study of children's event narratives. Ph.D. dissertation, Yale University. [RCS]

Hanson, N. R. (1969) Perception and discovery. San Francisco: Freeman, Cooper. [DOW]

Hayes, P. J. (1977) In defence of logic. In: Proceedings of the 5th international joint conference on artificial intelligence, ed. Reddy, R.. Cambridge, Mass.: M.I.T. Press. [WES]

Hobbes, T. (1651) Leviathan. London: Willis. [JCM]

Hofstadter, D. R. (1979) Gödel, Escher, Bach. New York: Basic Books. [DOW]

Householder, F. W. (1962) On the uniqueness of semantic mapping. Word 18:173–85. [JCM]

Huxley, T. H. (1874) On the hypothesis that animals are automata and its history. In: Collected Essays, vol. 1. London: Macmillan, 1893. [JCM]

Kolers, P. A. & Smythe, W. E. (1979) Images, symbols, and skills. Canadian Journal of Psychology 33:158–84. [WES]

Kosslyn, S. M. & Shwartz, S. P. (1977) A simulation of visual imagery. Cognitive Science 1:265–95. [WES]

Lenneberg, E. H. (1975) A neuropsychological comparison between man, chimpanzee and monkey. Neuropsychologia 13:125. [JCE]

Libet, B. (1973) Electrical stimulation of cortex in human subjects and conscious sensory aspects. In: Handbook of sensory physiology, vol. II, ed. Iggo, A., pp. 743–90. New York: Springer-Verlag. [BL]

Libet, B.; Wright, E. W., Jr.; Feinstein, B.; & Pearl, D. K. (1979) Subjective referral of the timing for a conscious sensory experience: a functional role for the somatosensory specific projection system in man. Brain 102:191–222. [BL]

Longuet-Higgins, H. C. (1979) The perception of music. Proceedings of the Royal Society of London B 205:307–22. [JCM]

Lucas, J. R. (1961) Minds, machines, and Gödel. Philosophy 36:112–127. [DRH]

Lycan, W. G. (forthcoming) Form, function, and feel. Journal of Philosophy. [NB, WGL]

McCarthy, J. (1979) Ascribing mental qualities to machines. In: Philosophical perspectives in artificial intelligence, ed. Ringle, M.. Atlantic Highlands, N.J.: Humanities Press. [JM, JRS]

Marr, D. & Poggio, T. (1979) A computational theory of human stereo vision. Proceedings of the Royal Society of London B 204:301–28. [JCM]

Marshall, J. C. (1971) Can humans talk? In: Biological and social factors in psycholinguistics, ed. Morton, J.. London: Logos Press. [JCM]

Marshall, J. C. (1977) Minds, machines and metaphors. Social Studies of Science 7:475–88. [JCM]

Maxwell, G. (1976) Scientific results and the mind-brain issue. In: Consciousness and the brain, ed. Globus, G. G., Maxwell, G., & Savodnik, I.. New York: Plenum Press. [GM]

Maxwell, G. (1978) Rigid designators and mind-brain identity. In: Perception and cognition: Issues in the foundations of psychology, Minnesota Studies in the Philosophy of Science, vol. 9, ed. Savage, C. W.. Minneapolis: University of Minnesota Press. [GM]

Mersenne, M. (1636) Harmonie universelle. Paris: Le Gras. [JCM]

Moor, J. H. (1978) Three myths of computer science. British Journal of the Philosophy of Science 29:213–22. [JCM]

Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83:435–50. [GM]

Natsoulas, T. (1974) The subjective, experiential element in perception. Psychological Bulletin 81:611–31. [TN]

Natsoulas, T. (1977) On perceptual aboutness. Behaviorism 5:75–97. [TN]

Natsoulas, T. (1978a) Haugeland's first hurdle. Behavioral and Brain Sciences 1:243. [TN]

Natsoulas, T. (1979b) Residual subjectivity. American Psychologist 33:269–83. [TN]

Natsoulas, T. (1980) Dimensions of perceptual awareness. Psychology Department, University of California, Davis. Unpublished manuscript. [TN]

Nelson, K. & Gruendel, J. (1978) From person episode to social script: two dimensions in the development of event knowledge. Paper presented at the biennial meeting of the Society for Research in Child Development, San Francisco. [RCS]

Newell, A. (1973) Production systems: models of control structures. In: Visual information processing, ed. Chase, W. C.. New York: Academic Press. [WES]

Newell, A. (1979) Physical symbol systems. Lecture at the La Jolla Conference on Cognitive Science. [JRS]

Newell, A. (1980) Harpy, production systems, and human cognition. In: Perception and production of fluent speech, ed. Cole, R.. Hillsdale, N.J.: Erlbaum Press. [WES]

Newell, A. & Simon, H. A. (1963) GPS, a program that simulates human thought. In: Computers and thought, ed. Feigenbaum, A. & Feldman, V., pp. 279–93. New York: McGraw Hill. [JRS]

Panofsky, E. (1954) Galileo as a critic of the arts. The Hague: Martinus Nijhoff. [JCM]

Popper, K. R. & Eccles, J. C. (1977) The self and its brain. Heidelberg: Springer-Verlag. [JCE, GM]

Putnam, H. (1960) Minds and machines. In: Dimensions of mind, ed. Hook, S., pp. 138–64. New York: Collier. [MR, RR]

Putnam, H. (1975a) The meaning of “meaning.” In: Mind, language and reality. Cambridge University Press. [NB, WGL]

Putnam, H. (1975b) The nature of mental states. In: Mind, language and reality. Cambridge: Cambridge University Press. [NB]

Putnam, H. (1975c) Philosophy and our mental life. In: Mind, language and reality. Cambridge: Cambridge University Press. [MM]

Pylyshyn, Z. W. (1980a) Computation and cognition: issues in the foundations of cognitive science. Behavioral and Brain Sciences 3. [JRS, WES]

Pylyshyn, Z. W. (1980b) Cognitive representation and the process-architecture distinction. Behavioral and Brain Sciences. [ZWP]

Russell, B. (1948) Human knowledge: its scope and limits. New York: Simon and Schuster. [GM]

Schank, R. C. & Abelson, R. P. (1977) Scripts, plans, goals, and understanding. Hillsdale, N.J.: Lawrence Erlbaum Press. [RCS, JRS]

Searle, J. R. (1979a) Intentionality and the use of language. In: Meaning and use, ed. Margalit, A.. Dordrecht: Reidel. [TN, JRS]

Searle, J. R. (1979b) The intentionality of intention and action. Inquiry 22:253–80. [TN, JRS]

Searle, J. R. (1979c) What is an intentional state? Mind 88:74–92. [JH, GM, TN, JRS]

Sherrington, C. S. (1950) Introductory. In: The physical basis of mind, ed. Laslett, P., Oxford: Basil Blackwell. [JCE]

Slate, J. S. & Atkin, L. R. (1977) CHESS 4.5 – the Northwestern University chess program. In: Chess skill in man and machine, ed. Frey, P. W.. New York, Heidelberg, Berlin: Springer Verlag.

Sloman, A. (1978) The computer revolution in philosophy. Harvester Press and Humanities Press. [AS]

Sloman, A. (1979) The primacy of non-communicative language. In: The analysis of meaning (informatics 5), ed. McCafferty, M. & Gray, K.. London: ASLIB and British Computer Society. [AS]

Smith, E. E.; Adams, N.; & Schorr, D. (1978) Fact retrieval and the paradox of interference. Cognitive Psychology 10:438–64. [RCS]

Smythe, W. E. (1979) The analogical/propositional debate about mental representation: a Goodmanian analysis. Paper presented at the 5th annual meeting of the Society for Philosophy and Psychology, New York City. [WES]

Sperry, R. W. (1969) A modified concept of consciousness. Psychological Review 76:532–36. [TN]

Sperry, R. W. (1970) An objective approach to subjective experience: further explanation of a hypothesis. Psychological Review 77:585–90. [TN]

Sperry, R. W. (1976) Mental phenomena as causal determinants in brain function. In: Consciousness and the brain, ed. Globus, G. G., Maxwell, G., & Savodnik, I.. New York: Plenum Press. [TN]

Stich, S. P. (in preparation) On the ascription of content. In: Entertaining thoughts, ed. Woodfield, A.. [WGL]

Thorne, J. P. (1968) A computer model for the perception of syntactic structure. Proceedings of the Royal Society of London B 171:377–86. [JCM]

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, ed. Anderson, A. R., pp. 4–30. Englewood Cliffs, N.J.: Prentice-Hall. [MR]

Weizenbaum, J. (1965) Eliza – a computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery 9:36–45. [JRS]

Weizenbaum, J. (1976) Computer power and human reason. San Francisco: W. H. Freeman. [JRS]

Winograd, T. (1973) A procedural model of language understanding. In: Computer models of thought and language, ed. Schank, R. & Colby, K.. San Francisco: W. H. Freeman. [JRS]

Winston, P. H. (1977) Artificial intelligence. Reading, Mass.: Addison-Wesley. [JRS]

Woodruff, G. & Premack, D. (1979) Intentional communication in the chimpanzee: the development of deception. Cognition 7:333–62. [JCM]
