This week both lectures continued on from part one and the debate between strong AI and weak AI. We also looked at what an argument is and what makes a good argument.
An argument is a list of sentences. All but the last are called premises, and the last is the conclusion. The premises need not be true. An argument is valid when it is impossible for the premises to be true and the conclusion false: in a valid argument, if the premises are true, the conclusion must be true.
P1 Socrates is a Martian
P2 If Socrates is a Martian then Plato is from Venus
C Plato is from Venus
The above argument is valid; however, it is not sound. The premises are false, and therefore the argument is unsound. A sound argument is a valid argument with true premises. In a valid argument the conclusion is a logical consequence of the premises. When analysing an argument you should always ask: is it possible for the premises to be true and the conclusion false?
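Validity in this sense can be checked mechanically. As a sketch (my own illustration, not anything from the lecture), the Socrates/Plato argument can be tested by enumerating every truth assignment and looking for a counterexample where the premises are true but the conclusion is false:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

def is_valid():
    # M = "Socrates is a Martian", V = "Plato is from Venus"
    for m, v in product([True, False], repeat=2):
        premises_true = m and implies(m, v)  # P1 and P2
        conclusion = v                       # C
        if premises_true and not conclusion:
            return False  # counterexample found: premises true, conclusion false
    return True  # no counterexample in any row: the argument is valid

print(is_valid())  # True
```

The check comes out `True` even though both premises are in fact false, which illustrates the point above: validity is about the form of the argument, while soundness additionally requires true premises.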
The most recognisable valid argument form is modus ponens. This is a rule of logic that says that if a conditional statement (if P then Q) is accepted, and the antecedent (P) holds, then the consequent (Q) may be inferred.
If P then Q
P
Therefore, Q
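The same brute-force truth-table idea can be written generically for any two-variable argument form. This is my own sketch (the `valid` helper is hypothetical, not something from the lecture); premises and conclusion are passed in as functions of the truth values of P and Q:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes
    every premise true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: "If P then Q", "P", therefore "Q".
mp = valid([lambda p, q: (not p) or q,  # if P then Q
            lambda p, q: p],            # P
           lambda p, q: q)              # Q
print(mp)  # True
```

Running the checker confirms that modus ponens is valid: there is no way to make both premises true while the conclusion is false.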
The main point of the final lecture of part II was to find another strong argument against strong AI. Much of the lecture looked at the Churchlands' response and the Mind-Body problem. This is the problem of how to understand the relationship between conscious experience and the brain. Are they different kinds of stuff, the mind a kind of spirit and the brain mere flesh? Or are they one and the same?
With respect to strong AI, the Churchlands also claim that strong AI is false. They say that “rule-governed symbol manipulation can never constitute semantic phenomena”, in response to Searle’s Chinese Room thought experiment, which likewise concludes that strong AI is false. On this view, the understanding being generated in the Chinese room must be of a very low grade, the kind that is too feeble for us to appreciate. No wonder that when we look at the Chinese room there is no Chinese understanding to be found, because there is hardly any there at all. This is not quite the view a proponent of strong AI would hold, so for now this argument is put on hold.
Searle’s next argument is also questioned. In particular, axiom (or premise) 3 is questioned: syntax by itself is not sufficient for semantics. The argument does not mention the Chinese room, so what role is the Chinese room playing? In fact, it is playing the role of something that is just syntax, not semantics. So there might be another way of securing axiom 3. In discussing this axiom further, Alex gives the example of a computer floating in deep space with a program loaded onto it, such that the computer believes that the president of MIT is Rafael Reif (which is true). How can this isolated computer, with no connection to MIT whatsoever, believe that Reif is the president of MIT? Suppose there is a twin Earth that is molecule-for-molecule the same as our Earth, and it too has a President Reif at its MIT. Why doesn’t the computer, which is equally disconnected from the twin MIT, believe that the twin Reif is the president of MIT? There seems to be no good answer to this question. In favour of the premise mentioned earlier, we would say that the computer is just manipulating symbols, and the symbols don’t mean anything in particular; that is all that is going on. The system doesn’t have any actual beliefs about anybody.
Finally, the Robot Reply. This says that if you put the Chinese room in the skull of a robot, gave the robot senses, and allowed it to move around, then, if you set things up properly, it would have Chinese understanding. Searle’s reply is that if this were the case and he were shuffling symbols inside the room inside the robot, he would still have no way of attaching meaning to any of the symbols. The fact that the robot is engaged in causal interaction with the outside world won’t help. So this is saying that Searle won’t understand Chinese, and neither will the robot. Since the Robot Reply amounts to only a “strong-ish” AI, Searle does have a point against strong AI.
Some pretty challenging thinking so far, which I have tried to summarise here. Looking forward to next week.