Philosophy of Technology: Human vs Machine – Week 4

The main focus this week was the Edge question – What to Think About Machines That Think. We read and discussed a range of articles that addressed this question.

We started the session by discussing the question, ‘What is thinking?’. We gave Kerry lots of different ways in which we think, such as analysing, being creative, imagining, reasoning rationally and reflecting on the self. We decided that thinking is a hard term to define, and that if we take the way humans think as the benchmark for thinking, then a machine that can think in the same way must also be capable of thinking. One difference is that machines/computers do not have emotions or experiences. The way humans think draws on personal experience, emotion and much more. In other words, we are a subjective species, whereas machines are objective.

As humans we are also very aware of ourselves as physical entities – phenomenology. Computers are not aware of themselves and have no consciousness. They also do not feel pleasure or pain as we do. However, a machine does have sensors, so it can ‘experience’ the real world through these input devices, but that is basically just data collection. When it comes to data processing, computers are the best, but genuine experience is something they do not have. Kerry looked worried when she realised that computers could be great wine tasters – another human job gone!

[Image: RobotWine.jpg]

We then read part of an article by Steven Pinker titled ‘Thinking does not imply subjugating’. A quote from the article is below:

“Just as inventing the car did not involve duplicating the horse, developing an AI system that could pay for itself won’t require duplicating a specimen of Homo sapiens.”

He is making the point that when we invent and create, we are not trying to copy something from nature completely. The cars we invented do not exhibit the other behaviours of a horse, much as the airplane was invented without building in the behaviours of a bird. We fear that if something exhibits human behaviour it will run amok and take over the world, but Pinker argues that this just isn’t the case.

The next article we discussed was by Matt Ridley and is called ‘Among the machines, not within the machines’. A great quote from this article is below:

“The true transforming genius of human intelligence is not individual thinking at all but the collective, collaborative, and distributed intelligence – the fact that it takes thousands of different people to make a pencil, not one of whom knows how to make a pencil.”

He points out that what truly turned the human race into world dominators was the invention of exchange and specialisation – the network effect. We all really liked this article and agreed with Ridley’s ideas. He argues that if we had remained largely autonomous individuals we would still be living in caves as hunter-gatherers. Our greatest invention is the Internet, because it connects large numbers of computers together; for Ridley, the Internet is the true machine intelligence.

“Where machine intelligence will make the most difference is among the machines, not within the machines.”

The last article we discussed was by Thomas Metzinger and was called ‘What if they need to suffer?’. He made the point that we’re smart because we hurt, we regret and we know we are mortal beings. In other words, we care! He asked whether good AI will also need to care about itself and other things. If an AI has its own thoughts, will those thoughts matter to it?

We talked about whether, if we could make a machine capable of suffering, we should do it. Kerry and the rest of us were pretty much against it. What would be the point? It isn’t ethical anyway. The slave trade was built on human suffering – slaves were dehumanised – and there have been other examples of suffering that we would not wish on anybody. Kerry said it was too close to playing God. As humans we all suffer, some more than others, and a lot of suffering is unfair, cruel and nasty. Kerry pointed out that when we have children we know they are going to suffer, it is inevitable, so should we have children? If we know a baby is going to be born with a terrible disease, suffer for 5 years and then die, should we have that baby?

The author proposed a set of conceptual constraints for deciding which systems count as objects of ethical consideration: any system that satisfies all the constraints should be treated ethically. For example, an unconscious robot cannot suffer, so it falls outside that circle.
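As a toy illustration only (the sketch and its constraint names below are my own hypothetical stand-ins, not Metzinger’s actual list), the structure of his test is an all-or-nothing check – a system earns ethical consideration only if every constraint holds:

```python
# Toy sketch of an "all constraints must hold" ethical test.
# The constraint names are hypothetical stand-ins, not Metzinger's own.

def deserves_ethical_consideration(system: dict) -> bool:
    """Return True only if the system satisfies every constraint."""
    constraints = [
        system.get("is_conscious", False),              # an unconscious robot cannot suffer
        system.get("has_self_model", False),            # there must be a self to whom things matter
        system.get("can_have_negative_states", False),  # capable of states it wants to end
    ]
    return all(constraints)

# An unconscious robot fails the first constraint, so the whole test fails:
print(deserves_ethical_consideration({"is_conscious": False}))  # False
```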

This was another very interesting week of the course that raised some important issues. Halfway through the course now, just 4 weeks to go!

 


4 thoughts on “Philosophy of Technology: Human vs Machine – Week 4”

  1. I have two remarks (if I may…). The first is about “naturalness” and “playing God”. In your report it sounds like you just let that argument stand as it is. Didn’t you discuss what it implies (to even compare human activity to that of a dogmatic divine entity)? This strain of argumentation often comes up in one way or another in the debates on bio- and nanotechnologies, from both religious and atheist discourse participants. I guess Kerry mentioned it thinking of “being in the position to decide over a baby’s life or death”. This brings me to the second point, inspired by your question “If we know a baby is going to be born with a terrible disease and it is going to suffer for 5 years and then die should we have that baby?”: What if an artificial intelligence, by mapping and evaluating future trajectories and analysing potentiality fields, comes to the conclusion that mankind is a threat to this planet and – given its current state and extrapolated progress – will go extinct anyway in about a century? “Would we still have it (on this planet)?” Maybe it is better to kill them all to prevent further anthropogenic damage being inflicted on the ecosystem? How would we respond?

    • Sorry, forgot one thing: maybe we shouldn’t program AIs solely with consequentialist ethical routines and algorithms! Just as we shouldn’t overestimate our own power to oversee all the factors that make the life of a future baby worth living…

    • Thanks for the reply. To answer your first point, we did not discuss the ‘playing God’ comment in detail. We talked about how much the human race suffers and how this makes us more intelligent. We also discussed issues such as testing medicines or products on humans, animals and plants, and how, if it were for the greater good, such as curing cancer, it would be acceptable – so a utilitarian perspective.
      For your second point I would respond with Isaac Asimov’s three laws of robotics, in particular the First Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” It is in our interest to design and build AI with our own interests in mind and to build in safeguards. As AI becomes more advanced, maybe we won’t know the full extent of its capabilities and the possibilities that follow. I know that Professor Stephen Hawking has expressed concern over AI, saying that “Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.” AI and robots killing humans does sound like science fiction, because why would they decide to do that? Why would it be in their interests?
