Philosophy of Technology: Human vs Machine – Week 10 (Final Week)

In this final session we discussed some of the best sci-fi films exploring technology, artificial intelligence, robotics and more. In particular we discussed Ex Machina and watched some scenes from the film.

Kerry had recommended Ex Machina back in week 1 of the course, describing it as one of the best recent sci-fi films to engage seriously with ideas about artificial intelligence and the Turing Test.


Can a cyborg manipulate, control and seduce a human being? If it can, then surely it has passed the Turing Test. In this film the robot in question, Ava, has been engineered by Nathan, a multi-millionaire tech mogul who builds artificially intelligent robots at his secluded estate in his spare time. Caleb is an employee selected to visit the estate and take part in a Turing Test with Ava. Over many sessions during the film, Caleb and Ava get to know each other: Ava has been designed to present as a sweet young woman, and Caleb is a potential mate for her. The plot is clever, with many twists and turns.

Throughout the course we have talked about computers and machines as simply following a set of programmed instructions given by a coder. In this film Ava breaks her programming: she breaks the rules and turns the tables on her creator. Ava knows she is female, knows she has sexuality, and knows how to use it. She is held captive in the facility with no prospect of being let out, so what is she to do? Hatch an escape plan, of course. She uses Caleb to help plot her escape, kill Nathan and win her freedom. Her striving to escape, to have freedom and to explore the world is a deeply human quality. Is she morally right to kill the person holding her captive? If we were prisoners in her situation, would we also kill someone to escape and regain our freedom? We probably would, so arguably Ava is morally right to take this course of action.

It is certainly an interesting film and worth a look.

We also talked about Star Wars, and how it is not really sci-fi but mythology, more of a religious experience or even a fairy tale (“A long time ago in a galaxy far, far away…”).

People recommended Black Mirror and Arrival as recent quality sci-fi TV and film examples.

Kerry drew comparisons between Black Mirror and a Kantian analysis of power in a class system where everyone is trying to please everyone else, resulting in “wall-to-wall hypocrisy”.

Other films we discussed included Lucy, Blade Runner and 2001: A Space Odyssey.


I have not seen Lucy, but Blade Runner and 2001: A Space Odyssey are classic movies that I urge everyone to see. A common theme in sci-fi films is a vision of a dystopian near future where technology is used to oppress society: think 1984, Terminator, The Matrix, Brave New World and more. We talked about the political context of such films; some clearly take a view to the left or to the right. Conservative films (Logan’s Run, Escape from New York) evidence fears of liberal modernity, while left-leaning films use the rhetorical mode of temporal displacement to criticise the current inequalities of capitalism. Such films put on display the split that runs through America in particular, liberals vs conservatives, in which workers are essentially slaves of capital. Blade Runner, for example, calls attention to the oppressive nature of capitalism and advocates revolt against exploitation. The film depicts how capitalism turns humans into machines.

Some dystopian elements of Blade Runner include:

  • sense of architectural chaos and disorder
  • advertising as a constant background
  • pollution and environmental damage
  • lack of anything organic
  • no sign of government or any authority (apart from police)

“Blade Runner’s dystopian cityscape generally reflects the anxieties of an affluent, suburban, white middle class; people who view the city environment as dangerous, chaotic, unstable, lawless, dominated by “the Other”; considering the massive movement to the suburbs over the last half century, this characterizes an awful lot of us.”

This course was an absolute pleasure from week 1 to week 10 and I recommend it to anyone interested in technology, society, philosophy, thinking, knowledge, AI and more. I learnt so much, and as a teacher of technology much of it will be very useful in my future teaching. A huge thank you to Dr Kerry Sanders from Sydney University for leading this outstanding 10-week course.

Week 10 Sources

Dystopia and Science Fiction: Blade Runner, Brazil and Beyond 2005, The Digital Cultures Project, accessed 30 March 2017, <>

Blade Runner (1982) 2016, accessed 30 March 2017, <>

Lucy (2014) 2016, accessed 30 March 2017, <>

Ex Machina, review: Lively film engages with our fears about artificial intelligence 2015, Independent, accessed 30 March 2017, <>

Rose, S 2015, Ex Machina and sci-fi’s obsession with sexy female robots, The Guardian, accessed 30 March 2017, <>

Watercutter, A 2015, Ex Machina Has a Serious Fembot Problem, Wired Magazine, accessed 30 March 2017, <>

Philosophy of Technology: Human vs Machine – Week 9

The penultimate week of the course. This week we focused on self-driving cars and drone technology.

We started by talking about how the motor vehicle has shaped our urban environments. Cities are designed around cars: street layouts, bridges, intersections, tunnels, road signs, traffic lights and more all exist because of the car. When something new comes along, cities and urban structures are reshaped again, and with autonomous cars we can expect future cities to change dramatically. We discussed a future where people don’t own a car; instead we all share cars that we order on a smartphone. The car picks us up, takes us to our destination and then makes another trip for someone else, continuously, 24/7. The cars will be electric and also use solar power, pumping the energy they don’t use during the day back into the grid for people to use in their homes. We will not need parking spaces and multi-storey car parks, so that space can be used for other buildings and public spaces.

Here are two great articles about Uber’s self-driving cars.

Uber’s First Self-Driving Fleet Arrives in Pittsburgh This Month

Here’s What It’s Like to Ride In Uber’s Self-Driving Car


If people won’t own cars in the future, this will impact our social lives. Cars are social spaces where people meet, talk and spend time together. Young people have traditionally used the car to escape from their parents with a girlfriend or boyfriend; where will young people go to escape in future?

Self-driving cars will eradicate drink-driving, a huge positive. Might one effect be that people drink more when they go out, knowing they won’t have to drive home?

Employment will change dramatically. Taxi drivers, delivery drivers and other professions will become redundant, and different jobs will be needed. Will driving become more of a hobby and a more popular sport? Driving might be an activity people do on the weekend for fun!

How will the transition happen? It will be messy and take time. We thought car insurance may become astronomical to put people off owning a car, pushing them towards the new self-driving fleet.

Our attitudes to cars will also change. We won’t care about cars because we won’t own them, and we won’t care what type of car picks us up; just as with public transport we don’t care what type of bus or train we catch, it will be the same with the car.

Driverless cars will also affect the economy. People won’t get speeding fines or parking tickets, and won’t pay for parking, so councils will lose a lot of revenue. How will they make money in new ways?

Some disadvantages could include the threat of somebody hacking your car with intent to do you harm. We also voiced privacy concerns: via the app, others could see where you have been and where you are going, and how much data about your trips will be collected and made available? We talked again about the generational gap: people under 55 probably won’t care, but people over 55 will not be happy about sharing more personal data.

We talked about how cars would make decisions when faced with a choice about who or what to crash into. We discussed the phrase ‘garbage in, garbage out’ and how the algorithms that control this technology will need to work perfectly to prevent crashes. We know that computer systems lack an understanding of the world. As humans we understand our environment far better, and we use our experiences to build maps and connections of the world. For example, we know if we’re in the wrong place; we sense when we are lost or stuck somewhere. Cars follow algorithms and GPS data; if that data is wrong they do not know it, but we do, out of instinct.
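The ‘garbage in, garbage out’ point can be made concrete with a toy sanity check (entirely my own sketch, not how any real car works): a navigation system that blindly trusts GPS fixes will happily act on an impossible one unless a plausibility filter rejects it first.

```python
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float   # degrees
    lon: float   # degrees
    t: float     # seconds since start of trip

# Generous upper bound for a road vehicle; anything faster is garbage data.
MAX_SPEED_MPS = 70.0  # ~250 km/h

def plausible(prev: Fix, new: Fix) -> bool:
    """Reject a GPS fix that implies an impossible speed.

    A system that skips this kind of check acts on bad data without
    ever knowing it is bad -- garbage in, garbage out.
    """
    dt = new.t - prev.t
    if dt <= 0:
        return False  # time went backwards: definitely garbage
    # Crude flat-earth conversion: roughly 111 km per degree. Good
    # enough for a sanity check over small distances.
    metres = ((new.lat - prev.lat) ** 2 + (new.lon - prev.lon) ** 2) ** 0.5 * 111_000
    return metres / dt <= MAX_SPEED_MPS
```

The filter only catches impossible data, of course; the harder problem discussed above, plausible-but-wrong data, is exactly what instinct handles and algorithms do not.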

Now we moved on to the week 9 reading about drone technology.

“UAVs, UASs, RPVs – unmanned air vehicles, unmanned aircraft systems, remotely piloted vehicles – are invading the skies. Everyone calls them drones, ignoring the best efforts of political-correctness enforcers to call them something else. They are the wave of the future in global aviation.”

Commercial drones are now widespread. There are rules and regulations around the use of drones, but we questioned who enforces them. Here is a breakdown of the rules for flying drones for fun and recreation in Australia from the Civil Aviation Safety Authority (CASA).

More information can be found here.

We looked at the main uses for drones, these include:

  • Aerial imagery
  • Inspections
  • Survey
  • Real estate
  • Movie and TV
  • Mapping
  • Agriculture

I made the point that drones are now being used in education to teach kids coding.


There is some public opposition to civilian drone use. People are concerned about certain uses, such as military applications and surveillance. Privacy is an issue, with worries about drones spying on private property and capturing images with onboard cameras. Someone asked whether we own the airspace above our own house, for example. Someone mentioned the movie ‘Eye in the Sky’ starring Helen Mirren, in which drone technology is used to spy on terrorists and potentially to strike them with missiles. If there is a chance that innocent people would be killed, should missiles be launched to kill one or two terrorists? It looks like an interesting movie about the morality of killing with drone technology.


Another use of drones, in this case a swarm of them, is this example from Vivid Sydney: a choreographed light and music entertainment experience held in 2016 on Sydney Harbour and organised by Intel. Have a look, it’s amazing!

Next week is week 10 and the final class.

Philosophy of Technology: Human vs Machine – Week 8

This session featured two main topics: medicine and crime.

We started by reading an article titled ‘Human Enhancement and Personal Identity’ by Philip Brey. The article discusses the implications of human enhancement for personal identity and assesses the social and ethical consequences of these changes. Human enhancement is an emerging field within medicine that aims to overcome the limitations of human cognitive and physical abilities. The advancement of this type of medical technology could open up a wide range of possibilities, including enhancements to strength, vision, intelligence and personality.

“The possibility of enhancement requires a rethinking of the aims of medicine. The primary aim of medicine has always been the treatment of illness and disability. That is, medicine has traditionally been therapeutic: it has been concerned with restoring impaired human functions to a state of normality or health. Human enhancement aims to bring improvements to the human condition that move beyond a state of mere health.”

Philip Brey

We went on to discuss what the medical ‘norm’ actually means. Everyone develops differently, and what is normal for one person will not be normal for another; it can depend on your age and what country you live in. We have vastly different health at age 20 compared to age 60, for example. We have also raised the bar of what counts as normal over the years: we have eradicated some illnesses and improved health care vastly thanks to new technology, research and science. We talked about how much unhappiness you should put up with before you are allowed treatment. It is difficult, because only you know how unhappy you feel; no one else can truly know. Are you meant to think back to when you were happiest in your life, set that as the benchmark, and qualify for treatment if you fall below it? We talked about drugs and alcohol as a ‘cure’ for unhappiness, but these are flawed treatments, as everyone knows. If we were able to eradicate unhappiness, life would be great and we would still have the ability to strive and reach goals. Some people disagreed with this concept, and Kerry said she “was not going to put it in the water.”

We went on to discuss how athletes use performance-enhancing drugs and how the idea of an ‘enhanced’ Olympics has been suggested before, though this just encourages drug use, which is unsafe. We talked about how, if everyone was a fast runner, what would be the point of competition? And if everyone was brilliant at playing the violin, would we still go to concerts?

In terms of enhancing personality, could we eradicate shyness? Surely this would be a good thing, as people don’t really enjoy being shy and would rather not have it in their personality.


We then moved on to our ‘Crime’ sheet and started discussing issues relating to technology and crime. We began with an article claiming that altruism is amplified online, that people are more giving and friendly, but Kerry noted this probably relates more to people aged 50 and under. Younger people are also more likely to disclose personal information: what does one more disclosure matter? We discussed internet addiction and whether it is actually classified as a real addiction in the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders. According to a 2014 article, internet addiction is not a formal DSM-5 diagnosis, though internet gaming disorder is flagged as a condition for further study.


We read an article about internet addiction and how it is affecting people all over the world. We heard of people dying during gaming marathons of 24 or 36 hours, and of some countries in Asia creating internet addiction treatment centres as the problem gets worse and worse. South Korea even implemented ‘shutdown’ laws forcing teens to abandon their screens between midnight and 6am, although how they policed that we are not sure.

We then discussed the Dark Web, a part of the Internet used by criminals to buy and sell drugs, illegal pornography and more. It is a professional outfit: sellers offer special deals, coupons and money-back guarantees. It is a dangerous place to be, though, and attempting deals can be tricky and fraught with risk. It is designed to look authentic, so it doesn’t feel like you are committing a crime.

BBC – What is the dark web 


We finished the session by talking about 3D printing, and in particular the ability to 3D print a gun. This is possible if you own a 3D printer and can download an .stl file of a firearm; in 2014 a Japanese man was arrested for making 3D-printed guns. Should this technology be allowed? We talked about making printers that are incapable of printing a gun, or banning algorithms that can design guns.
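A printer that refuses to print a gun could be imagined, very naively, as a blocklist check; this is a sketch of my own with a made-up blocklist, not any real printer’s firmware. Its weakness also hints at why the idea is so hard to enforce: change a single byte of the model file and the fingerprint no longer matches.

```python
import hashlib

def allowed_to_print(model_bytes: bytes, blocked_fingerprints: set) -> bool:
    """Naive 'gun-refusing printer': fingerprint the uploaded .stl file
    and refuse any model whose hash appears on a blocklist of known
    firearm designs.

    Trivially defeated: altering one byte of the file changes the
    SHA-256 fingerprint, so only exact copies are caught.
    """
    fingerprint = hashlib.sha256(model_bytes).hexdigest()
    return fingerprint not in blocked_fingerprints
```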

There are two sessions left. Next week we finish crime and move on to driverless cars.


Brey, P 2008, ‘Human Enhancement and Personal Identity’, in J Berg Olsen, E Selinger & S Riis (eds), New Waves in Philosophy of Technology, New Waves in Philosophy Series, Palgrave Macmillan, New York, pp. 169–185.


Naughton, J 2016, The Cyber Effect by Mary Aiken – review, The Guardian, accessed 17 March 2017, <>.

3D printed firearms 2016, Wikipedia, accessed 17 March 2017, <>.

Philosophy of Technology: Human vs Machine – Week 7

In our session this week we finished our discussion of Machine Ethics and began discussing Medical issues and technology.

We started the session by talking about intelligent machines that could be capable of devising their own rules. We asked who would be to blame when something goes wrong: the programmer did not code the instructions that caused the problem; the computer took the action through complex algorithms and artificial intelligence capabilities. But you can’t sue or punish a machine; they don’t care.

Kerry recommended a website called Moral Machine. It gives the user a series of scenarios in which the brakes on a driverless car have failed and the car is about to crash into people crossing a road. You have a moral choice to make in each scenario: crash into group 1 or group 2, where each group has different characteristics based on the people in it. It is an interesting dilemma, and at the end of the test you are given a breakdown of the results showing the types of people you favoured over others. In other words, the test tells you which types of people you value more.


One of the scenarios had just cats as passengers in one car, which seemed a little far-fetched! Although with driverless cars now a current technology, I suppose a car carrying only animal passengers is possible.
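The end-of-test breakdown can be imagined as a simple tally of who you chose to spare across all the scenarios. This is a toy sketch of my own, with made-up characters, not the site’s actual code:

```python
from collections import Counter

# Each scenario offers two groups the car could hit; the player picks
# which group to SPARE (the other group is hit). Characters invented here.
SCENARIOS = [
    (["child", "doctor"], ["elderly man", "cat"]),
    (["athlete", "cat"], ["executive", "child"]),
]

def tally_spared(choices):
    """Count how often each kind of character was spared.

    choices[i] is 0 or 1: the index of the group spared in scenario i.
    The final tally is the 'who do you value' breakdown the site shows.
    """
    spared = Counter()
    for groups, pick in zip(SCENARIOS, choices):
        spared.update(groups[pick])
    return spared
```

Across enough scenarios the counts reveal a pattern (always sparing the young, say, or always sparing humans over animals), which is exactly what makes the results uncomfortable to read.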


We discussed Immanuel Kant and how he believed certain types of actions were absolutely prohibited, even if the consequences would bring about more happiness. He said that before you can act you have to ask two questions:

  1. Can I rationally will that everyone act as I propose to act? If the answer is no then you must not act.
  2. Does my action respect the goals of human beings rather than merely using them for my own purposes? If the answer is no then you should not perform the action.

In this case we decided that Kant would take neither course of action, so here Kant is not particularly useful; Kerry said that Kant would not even get in the car in the first place, and you may as well just stay in bed! For Kant, morality excludes emotion: the brain is a logical, rational machine, and if you act from emotion then you act without moral worth. But is it possible to leave emotion out of your decision-making? Sometimes lying is a good thing, sometimes we need to lie, but Kant says lying is never permissible.
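Kant’s two questions can be caricatured as a strict two-gate check; this is a toy sketch of my own, not a serious formalisation. What is notable is what the check leaves out: consequences and emotions play no part at all.

```python
def kant_permits(universalizable: bool, respects_persons: bool) -> bool:
    """Kant's two questions as a two-gate check: an action is permissible
    only if BOTH answers are 'yes'. Consequences never enter into it.
    """
    return universalizable and respects_persons

# Lying fails the first gate: we cannot rationally will that everyone lie,
# because promises and assertions would then become meaningless.
lying_permitted = kant_permits(universalizable=False, respects_persons=False)
```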

We talked about how machines should be designed in favour of humans; for example, erring towards overpaying rather than underpaying. We used the example of an ATM, which is coded to take money back if it is not claimed within a few minutes, so if you forget to take the money no one else will get it.

We then talked about Maslow’s Hierarchy of Needs. We discussed how ICT has helped to secure some of our basic needs and offered opportunities to address some of our higher ones. ICT helps us communicate quickly and freely through a variety of methods, and to easily share valuable moments and memories using different media and documents.


We also talked about the positive aspects of computer gaming. Gaming helps build many positive characteristics including:

  • Self-knowledge
  • Friendship
  • Empathy
  • Engaging in shared activity
  • Sharing intimacy

It is true that many video games include questionable ideas and actions, and some people are increasingly worried about their violent nature. However, we discussed the idea of catharsis: does playing a violent video game allow people to release violent tendencies in a virtual world rather than in the real one?

We then moved on to a new sheet and to the topic of medicine and technology. Some of the topics we discussed were gene editing, diabetes and Alzheimer’s. An interesting article we discussed can be found here.


The article talks about gene-editing procedures that could prevent people from passing on serious medical conditions to their children, and says clinical trials could start soon. The process involves stopping a disorder by rewriting faulty DNA to make it healthy. It is amazing that this technology exists and that scientists may in future be able to prevent serious illness; that, of course, is a great idea. However, we discussed that this technology is so new that we don’t know what the consequences of manipulating genes will be. If we eradicate one disease, will it cause another or make other diseases more prevalent? We talked about how perfection is not perfect: it can have flaws, and those flaws can be advantageous. Diversity is good; there is a reason for it. Mutations can be good, too; we evolved through mutations because they were to our advantage.

Another interesting week of this course, with three weeks left to go.

Philosophy of Technology: Human vs Machine – Week 6

This week’s session focused on AI and Machine Ethics.

Kerry began the session by talking about a recent article and an interview with Elon Musk on Lateline talking about humans merging with machines. You can access the article in question by clicking here.


This idea is not new and has been around for over 50 years; progress is very slow due to the challenges of incorporating hardware into organic systems. The article is an interesting read, and we discussed the possibility of an AI deep-learning system becoming so advanced that it might decide humans are a bad idea for the survival of the planet. Kerry likened this to the fate of the Rapa Nui people, who used up all the resources of Easter Island; as people began to starve, war broke out among the tribes.


We went on to talk about the Turing Test and the two imitation games devised by Alan Turing: in the first, the interrogator must tell the difference between a man and a woman; in the second, between a human and a computer. The point of the test is whether you can be tricked. It tries to answer the question: is the machine behind the curtain actually thinking, and is its thinking human enough to fool you? We know that computers display only a small range of human behaviour; can they trick you into thinking they have fully human emotions and thoughts?
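The structure of the imitation game can be sketched as a simple protocol; the interfaces here are my own invention, not Turing’s formulation. The judge sees only text transcripts from two hidden players and must guess which is the machine.

```python
import random

def imitation_game(human_reply, machine_reply, judge, questions):
    """Toy protocol for Turing's imitation game.

    The judge sees only transcripts from two hidden players, 'A' and 'B',
    and names the label it believes is the machine. Returns True if the
    machine fooled the judge, i.e. it 'passed' this round.
    """
    # Randomly assign labels so the judge cannot know which is which.
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        assignment = {"A": machine_reply, "B": human_reply}

    transcripts = {
        label: [reply(q) for q in questions]
        for label, reply in assignment.items()
    }
    guess = judge(transcripts)  # the label the judge thinks is the machine
    machine_label = "A" if assignment["A"] is machine_reply else "B"
    return guess != machine_label
```

With a crude chatbot whose replies are obviously canned, an attentive judge wins every round; Turing’s real question is what happens once the replies stop being obvious.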


Loebner Prize Gold Medal – the prize in an artificial intelligence contest implementing the Turing Test. First prize is $100,000 and a gold medal for the first computer whose responses are indistinguishable from a human’s.

Kerry made the link to René Descartes and how we cannot trust our senses: “It is necessary that at least once in your life you doubt, as far as possible, all things” (René Descartes).




Parallel Lines Optical Illusion

We moved off AI and started to talk about machine ethics. We agreed that machines are designed and built so that they are safe to operate and won’t harm humans (remember Asimov’s three laws of robotics); even your toaster was designed in an ethical manner. We talked about how many things, especially services, are now only available online, which causes problems for people without access to technology and the online world. We heard examples of older people who do not have a mobile phone or the internet and so cannot access some services. It seems that today, if you don’t have a mobile phone or internet, you’re stuffed! The fact that services are online-only puts some people at a disadvantage, and this raises ethical questions: you have to ask whether there is an alternative way of doing things.

Other questions arose about social interaction. We can go through much of our daily lives without any social interaction at all: everything we require we can get through technology, and we don’t even need to speak to anybody at Woolies now the checkouts are computerised. Online social interaction is hugely popular, through sites such as Facebook and Twitter, but these platforms also make bullying easy. Other issues around anonymity and cyber crime are relevant here too.

We then spoke about the privacy issue: basically, should we have privacy or not? We decided it was a generational issue. Older people firmly believe in the privacy of information and in not handing over personal details; younger people do not seem to have an issue with it, thinking it a worthwhile trade-off for living in a digitally advanced age. We spoke about a recent decision by President Trump to have his staff’s phones checked to make sure they were not leaking information to the media, so even the US Government does not think its people should have privacy. We decided this was still better than being tortured for information, as in the days before digital communication and mobile phones. We also decided that governments need some privacy: absolute transparency can be very dangerous, as information can be mistreated, misrepresented and misused.



We finished this session talking about driverless cars and how they will decide whom to protect in the event of an accident. If the choice of greater harm is between the passenger and a person on the street, whom will the car choose to protect? Kerry had the idea that when you get into a driverless car, before the journey starts, you take a test to work out how important you are: the more valuable you are to society, the more the car will protect you in a crash. So if you’re Albert Einstein or a heart surgeon you’ll be fine! We liked the idea of this ranking system but thought it would be easy to cheat on the test, so we are not sure how it would work in reality.

Just two more sessions to go.