Thursday, January 20, 2011

The Chinese Room

Title: 
Minds, Brains, and Programs

Comments:
one: Aaron Kirkes
two: Stephen Morrow

Reference: 
Searle, John R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.

Summary: 
In this article, Searle discusses the question of whether a program can be sufficient for intentionality.  He uses the base example of himself, a native English speaker with no knowledge of the Chinese language, locked inside a room with only a set of instructions in English (which he can understand).  When Chinese symbols are passed to him under the door, the question is whether the set of instructions will be sufficient to convince a native Chinese speaker outside the room that he speaks Chinese (which he in fact does not).  Searle outlines the guidelines for his argument - the constraints and assumptions - and then proceeds to respond to common arguments against his point.  He concedes nothing except on points he sees as irrelevant to the problem at hand.  He concludes that no such set of instructions would be sufficient to give a machine understanding.
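To make the setup concrete, here is a minimal sketch (in Python, which the paper of course does not use) of the kind of purely formal symbol manipulation Searle has in mind.  The rule book and the symbol pairs are invented placeholders, not anything taken from the paper; the point is only that the lookup proceeds by the shape of the symbols, so even a convincing reply involves no understanding on the part of whoever, or whatever, follows the rules.

# A rough sketch of the purely syntactic rule-following in the thought experiment.
# The "rule book" maps incoming symbol strings to outgoing symbol strings; the
# operator applies it by shape-matching alone.  The symbol pairs below are
# invented placeholders, not real dialogue rules from the paper.
RULE_BOOK = {
    "你好吗": "我很好",       # if these symbols come in, pass these symbols out
    "你会说中文吗": "会",     # the operator never knows what either string means
}

def chinese_room(symbols_in: str) -> str:
    """Answer by looking up the rule book; no meaning is consulted anywhere."""
    return RULE_BOOK.get(symbols_in, "对不起")  # default symbols for unmatched input

if __name__ == "__main__":
    # To an outside observer this looks like a fluent reply,
    # even though nothing in the program understands Chinese.
    print(chinese_room("你好吗"))

Whether such a table, or any more sophisticated program, could ever be rich enough to pass for a native speaker is exactly what Searle's argument calls into question.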

Discussion: 
In my opinion, Searle makes well-thought-out and thorough arguments, both in his setup and in his responses to the arguments against his position.  The whole topic reminded me of Skynet in the Terminator series.  It seemed to me that although he laid out an easy-to-understand definition of the type of "understanding" he meant at the beginning of the paper, most of the arguments against his point boiled down once again to the definition of understanding.  In his refutations, he was able to exclude any information that went beyond his presuppositions about understanding.  I found it interesting that in the end he did state his stance: that our intentionality comes from some biological process that we cannot reproduce.  I would have assumed that he would leave his own stance up to speculation and allow the audience to formulate their own conclusions, but I appreciate his boldness in actually voicing it.
One fault I see, according to my taste in writing style, is that he often strays tangentially from the point he is trying to convey.  I know he probably does this for completeness, but it seems to draw attention away from his point.  Specifically, in the Systems Reply, I found it hard to see how he was tying all of his thoughts together; it just seemed to be a jumble of thoughts.
I like how he responded to the arguments people had raised against his position.  Future work could expound on the further arguments people have brought to him since his 1980 paper.  It could also consider the current state of computers compared to what they were back then; that would be along the lines of tracing the natural progression of technology in an effort to extrapolate where it could end up.  This could play a significant part in the argument about using different "stuff" to build a machine like us that has intentionality.
