Philosophy of Technology: Human vs Machine – Week 6

This week’s session focused on AI and Machine Ethics.

Kerry began the session by discussing a recent article and a Lateline interview with Elon Musk about humans merging with machines.


This idea is not new; it has been around for over 50 years, though progress is slow because of the difficulty of integrating hardware with organic systems. The article is an interesting read, and we discussed the possibility of an AI deep learning system becoming so advanced that it decides humans are a bad idea for the survival of the planet. Kerry likened this to the fate of the Rapa Nui people, who used up the resources of Easter Island; as people began to starve, war broke out among the tribes.


We went on to talk about the Turing Test and the two imitation games devised by Alan Turing. In the first game an interrogator tries to tell a man from a woman; in Turing's variant, one of them is replaced by a computer. The point of the test is whether you can be fooled: is the machine behind the curtain actually thinking, and is its thinking human enough to deceive you? Computers display only a small range of human behaviour, but can they trick you into believing they have fully human emotions and thoughts?
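The protocol Turing described can be sketched as a toy program: a judge reads two hidden transcripts and guesses which respondent is the machine. Everything here is illustrative — the respondents, the judge's heuristic and the question are invented for the sketch, not taken from the session:

```python
import random

def imitation_game(human, machine, judge, questions):
    """One round of a Turing-style imitation game.
    Returns True if the judge correctly identifies the machine."""
    # Hide identities: randomly seat the machine behind door A or door B.
    slots = {"A": machine, "B": human}
    if random.random() < 0.5:
        slots = {"A": human, "B": machine}
    # Each hidden respondent answers every question.
    transcripts = {slot: [f(q) for q in questions] for slot, f in slots.items()}
    # The judge sees only the transcripts and guesses "A" or "B".
    guess = judge(transcripts["A"], transcripts["B"])
    machine_slot = "A" if slots["A"] is machine else "B"
    return guess == machine_slot

# Toy respondents and a judge that spots the machine's canned phrasing.
human = lambda q: "Hmm, let me think about " + q
machine = lambda q: "ANSWER: " + q.upper()
judge = lambda a, b: "A" if any(x.startswith("ANSWER:") for x in a) else "B"

print(imitation_game(human, machine, judge, ["what is love?"]))  # True: unmasked
```

With a respondent this crude the judge wins every round; the interesting question, and the one the Loebner Prize asks, is whether any machine can answer so that the judge does no better than chance.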


Loebner Prize Gold Medal – a prize for an artificial intelligence contest implementing the Turing Test. First prize is $100,000 and this gold medal, for the first computer whose responses are indistinguishable from a human's.

Kerry made the link to Rene Descartes and the idea that we cannot trust our senses: "It is necessary that at least once in your life you doubt, as far as possible, all things" (Rene Descartes).


Rene Descartes


Parallel Lines Optical Illusion

We moved off AI and started to talk about machine ethics. We agreed that machines are designed and built so that they are safe to operate and won't harm humans (remember Asimov's Three Laws of Robotics); even your toaster was designed in an ethical manner. We talked about how many things, especially services, are now only available online, which causes problems for people who do not have access to technology and the online world. We heard examples of older people who have no mobile phone or internet and so cannot access some services; it seems that today if you don't have a mobile phone or internet then you're stuffed! Online-only services put some people at a disadvantage, and that raises ethical questions: is there an alternative way of doing things?

Other questions arose about social interactions. We can now go through much of our daily lives without any social interaction at all: everything we need we can get through technology, and we don't even need to speak to anybody at Woolies because the checkouts are computerised. Online social interaction is hugely popular on Facebook and Twitter, but those platforms also make bullying easy, and issues around anonymity and cyber crime are relevant here too.

We then spoke about privacy: should we have it or not? We decided it is largely a generational issue. Older people firmly believe in keeping personal information private, while younger people do not seem to mind; they think handing it over is a worthwhile trade-off for living in a digitally advanced age. We spoke about a recent decision by President Trump to have his staff's phones checked to make sure they were not leaking information to the media, so even the US Government does not think its people should have privacy. We decided this was still a better option than being tortured for information, as might have happened in the days before digital communication and mobile phones. We also decided that governments need some privacy of their own: absolute transparency can be dangerous, as information can be mistreated, misrepresented and misused.



We finished this session talking about driverless cars and how they will decide whom to protect in the event of an accident: if the greater harm must fall on either the passenger or a person on the street, who is the car going to choose? Kerry had the idea that before the journey starts you have to take a test to work out how important you are, and the more valuable you are to society, the more the car will protect you in a crash. So if you're Albert Einstein or a heart surgeon you will be fine! We liked the idea of this ranking system but thought it would be easy to cheat on the test, so we're not sure how it would work in reality.
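Kerry's ranking idea boils down to a one-line decision rule. A minimal sketch, with entirely invented scores (the function name and the tie-breaking choice are mine, not from the session):

```python
def choose_protectee(passenger_score, pedestrian_score):
    """Hypothetical rule from the discussion: in an unavoidable crash,
    protect whoever scored higher on the pre-journey 'importance' test.
    Ties favour the passenger here; that choice is arbitrary."""
    return "passenger" if passenger_score >= pedestrian_score else "pedestrian"

# Invented scores: a heart surgeon as passenger vs. an anonymous pedestrian.
print(choose_protectee(95, 40))  # passenger
print(choose_protectee(40, 95))  # pedestrian
```

Written out this way, the objection we raised is obvious: the whole scheme rests on the scores, and anyone who can game the test controls the outcome.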

Just two more sessions to go.