The main focus this week was the Edge question – What to Think About Machines That Think. We read and discussed a range of articles that addressed this question.
We started the session by discussing the question, ‘What is thinking?’. We gave Kerry lots of examples of the different ways we think: analysing, being creative, imagining, reasoning rationally and reflecting on the self. We decided that thinking is a hard term to define, but that if we accept what humans do as thinking, and a machine can be shown to think in the same way, then machines are also capable of thinking. One difference is that machines/computers do not have emotions or experiences. The way humans think draws on personal experience, emotion and much more. In other words, we are a subjective species, whereas machines are objective.
As humans we are also very aware of ourselves as physical entities – phenomenology. Computers are not aware of themselves and have no consciousness. They also do not feel pleasure or pain as we do. A machine does have sensors, so it can experience the real world through these input devices, but that is basically just data collection. Computers excel at anything that requires data processing, but real experience is beyond them. Kerry looked worried when she realised that computers could be great wine tasters – another human job gone!
We then read part of an article by Steven Pinker titled ‘Thinking does not imply subjugating’. A quote from the article is below:
“Just as inventing the car did not involve duplicating the horse, developing an AI system that could pay for itself won’t require duplicating a specimen of Homo sapiens.”
He is making the point that when we invent and create we are not trying to copy something from nature completely. The cars we invented do not exhibit the other behaviours of a horse, much as we invented the airplane without building in the behaviour of a bird. We fear that anything exhibiting human-like behaviour will run amok and take over the world, which just isn’t the case.
The next article we discussed was by Matt Ridley and is called ‘Among the machines, not within the machines’. A great quote from this article is below:
“The true transforming genius of human intelligence is not individual thinking at all but the collective, collaborative, and distributed intelligence – the fact that it takes thousands of different people to make a pencil, not one of whom knows how to make a pencil.”
He points out that what truly turned the human race into world dominators was the invention of exchange and specialisation – the network effect. We all really liked this article and agreed with Ridley’s ideas. He argues that if we had remained largely autonomous individuals we would still be living in caves as hunter-gatherers. Our greatest invention is the Internet, because it connected large numbers of computers together; for Ridley, the Internet is the true machine intelligence.
“Where machine intelligence will make the most difference is among the machines, not within the machines.”
The last article we discussed was by Thomas Metzinger and was called ‘What if they need to suffer?’. He made the point that we are smart because we hurt, we regret and we know we are mortal beings. In other words, we care! He asked whether good AI will also need to care about itself and other things. If an AI has its own thoughts, will those thoughts matter to it?

We talked about whether, if we could make a machine capable of suffering, we should do it. Kerry and the rest of us were pretty much against it. What would be the point? It isn’t ethical anyway. The slave trade was all about human suffering – slaves were dehumanised – and there have been other examples of suffering that we would not wish on anybody. Kerry said it was too close to playing God. As humans we all suffer, some more than others, and a lot of suffering is unfair, cruel and nasty. Kerry said that when we have children we know they are going to suffer – it is inevitable – so should we have children? If we know a baby is going to be born with a terrible disease, suffer for five years and then die, should we have that baby? Metzinger proposed a set of conceptual constraints for deciding which systems count as objects of ethical consideration: any system that satisfies all the constraints should be treated ethically. An unconscious robot, for example, cannot suffer.
This was another very interesting week of the course that raised some important issues. Halfway through the course now, just 4 weeks to go!