Philosophy of Technology: Human vs Machine – Final Week

In this final session we discussed some of the best sci-fi films that explore issues of technology, artificial intelligence, robotics and more. In particular we discussed Ex Machina and watched some scenes from it.

Kerry had recommended Ex Machina back in week 1 of the course, calling it one of the best recent sci-fi films that engages with real ideas about artificial intelligence and the Turing Test.


Can a robot manipulate, control and seduce a human being? If it can do that then surely it has passed the Turing Test. In this film the robot in question, Ava, has been engineered by Nathan, a multi-millionaire tech mogul who creates artificially intelligent robots at his secluded estate in his spare time. Caleb is an employee selected to visit Nathan’s estate to take part in a Turing Test with Ava. Caleb and Ava have many sessions during the film in which they get to know each other. Ava has been designed to be a sweet young woman, and Caleb a potential mate for her. The plot is clever, with many twists and turns throughout.

Throughout the course we have talked about computers and machines as simply following a set of programmed instructions given by a coder. In this film Ava breaks her programming: she breaks the rules and turns the tables on her creator. Ava knows she is female, knows she has sexuality, and knows how to use it. She is held captive in the facility with no prospect of release, so what is she to do? Hatch an escape plan, of course. She uses Caleb to help plot her escape, kill Nathan and win her freedom. She is striving to be human; the desire to escape, to be free and to explore the world is such a human quality, and she wants it too. Is she morally right to kill the person holding her captive? What would we do in her situation? If we were prisoners, would we also kill someone to escape and regain our freedom? We probably would, so Ava is arguably morally right to take this course of action.

It is certainly an interesting film and worth a look.

We also talked about Star Wars and we discussed how this was not really sci-fi but more mythology, more of a religious experience or even a fairy tale (“A long time ago in a galaxy far, far away”).

People recommended Black Mirror and Arrival as recent quality sci-fi TV and film examples.

Kerry drew comparisons between Black Mirror and Kantian analysis in terms of power in a class system where everyone is trying to please everyone else, resulting in “wall-to-wall hypocrisy”.

Other films we discussed were Lucy, Blade Runner and 2001: A Space Odyssey.


I have not seen Lucy, but Blade Runner and 2001 are classic movies that I urge everyone to see. A common theme in sci-fi films is a vision of a dystopian near future where technology is used to oppress society: think 1984, Terminator, The Matrix, Brave New World and more. We talked about the political context of such films; some clearly take a view to the left or to the right. Conservative films (Logan’s Run, Escape from New York) express fears of liberal modernity, while left-leaning films exploit the rhetorical device of temporal displacement to criticise the current inequalities of capitalism. Such films put on display the split that runs through America in particular, liberals vs conservatives, in which workers are essentially slaves of capital. Blade Runner, for example, calls attention to the oppressive nature of capitalism and advocates revolt against exploitation. The film depicts how capitalism turns humans into machines.

Some dystopian elements of Blade Runner include:

  • sense of architectural chaos and disorder
  • advertising as a constant background
  • pollution and environmental damage
  • lack of anything organic
  • no sign of government or any authority (apart from police)

“Blade Runner’s dystopian cityscape generally reflects the anxieties of an affluent, suburban, white middle class; people who view the city environment as dangerous, chaotic, unstable, lawless, dominated by “the Other”; considering the massive movement to the suburbs over the last half century, this characterizes an awful lot of us.”

This course was an absolute pleasure from week 1 to week 10 and I recommend it to anyone interested in technology, society, philosophy, thinking, knowledge, AI and more. I learnt so much from this course, and as a teacher of technology much of what I learnt will be very useful in my future teaching. A huge thank you to Dr Kerry Sanders from Sydney University for leading this outstanding 10-week course.

Week 10 Sources

Dystopia and Science Fiction: Blade Runner, Brazil and Beyond 2005, The Digital Cultures Project, accessed 30 March 2017, <>

Blade Runner (1982) 2016, accessed 30 March 2017, <>

Lucy (2014) 2016, accessed 30 March 2017, <>

Ex Machina, review: Lively film engages with our fears about artificial intelligence, 2015, Independent, accessed 30 March 2017, <>

Rose, S 2015, Ex Machina and sci-fi’s obsession with sexy female robots, The Guardian, accessed 30 March 2017, <>

Watercutter, A 2015, Ex Machina Has a Serious Fembot Problem, Wired Magazine, accessed 30 March 2017, <>

Philosophy of Technology: Human vs Machine – Week 8

This session featured two main topics: medicine and crime.

We started by reading an article titled ‘Human Enhancement and Personal Identity’ by Philip Brey. The article discusses the implications of human enhancement for personal identity and assesses the social and ethical consequences of these changes. Human enhancement is an emerging field within medicine that aims to overcome the limitations of human cognitive and physical abilities. It is thought the advancement of this type of medical technology could open up a wide range of possibilities, including enhancements related to strength, vision, intelligence and personality.

“The possibility of enhancement requires a rethinking of the aims of medicine. The primary aim of medicine has always been the treatment of illness and disability. That is, medicine has traditionally been therapeutic: it has been concerned with restoring impaired human functions to a state of normality or health. Human enhancement aims to bring improvements to the human condition that move beyond a state of mere health.”

Philip Brey

We went on to discuss what the medical ‘norm’ actually means. Everyone develops differently, and what is normal for one person will not be normal for someone else; it can depend on your age and what country you live in. We have vastly different health at age 20 compared to age 60, for example. We have also raised the bar of what counts as normal over the years: we have eradicated some illnesses and improved health care vastly thanks to new technology, research and science.

We talked about how much unhappiness you should have to put up with before you are allowed treatment. It is difficult, because only you know how unhappy you feel; no one else can truly know. Are you meant to think back to when you were happiest in your life, set that as the benchmark, and qualify for treatment if you fall below it? We talked about drugs and alcohol as a ‘cure’ for unhappiness, but these are flawed treatments, as everyone knows. If we were able to eradicate unhappiness, life would be great and we would still have the ability to strive and reach goals. Some people disagreed with this idea, and Kerry said she “was not going to put it in the water.”

We went on to discuss how athletes use performance-enhancing drugs all the time and how the idea of an ‘enhanced’ Olympics has been suggested before. But this just encourages drug use, which is unsafe. We talked about how, if everyone were a fast runner, what would be the point of competition, or if everyone were brilliant at playing the violin, would we still go to concerts?

In terms of enhancing personality, could we eradicate shyness in people? Surely this would be a good thing, as people don’t really like being shy; they would rather not have that in their personality.


We then moved on to our ‘Crime’ sheet and started discussing issues relating to technology and crime. We began with an article that claimed altruism is amplified online: people are more giving and friendly. Kerry noted this is probably age-related and applies more to people 50 and under. Younger people are also more likely to disclose personal information; what does one more disclosure matter? We discussed internet addiction and whether it is actually classified as a real addiction according to the DSM-5 (the Diagnostic and Statistical Manual of Mental Disorders). According to a 2014 article:

[Screenshot: excerpt from a 2014 article on whether internet addiction is classified in the DSM-5]

We read an article about internet addiction, which is affecting people all over the world. We heard of people dying after gaming marathons of 24 or 36 hours, and of some countries in Asia creating internet addiction treatment centres as the problem gets worse and worse. South Korea even implemented ‘shutdown’ laws forcing teens to abandon their screens between midnight and 6am, although how they policed that we are not sure.

We then discussed the Dark Web, a part of the Internet used by criminals to buy and sell drugs, illegal pornography and other contraband. It is a professional outfit: sellers offer special deals, coupons and money-back guarantees. It is a dangerous place to be, though, and attempting deals can be tricky and fraught with danger. It is designed to look authentic, so it doesn’t feel like you are committing a crime.

BBC – What is the dark web 


We finished the session by talking about 3D printing and in particular the ability to 3D print a gun. This is possible if you own a 3D printer and can download a .stl file of a firearm. In 2014 a Japanese man was arrested for making 3D-printed guns. Should this technology be allowed? We talked about making printers that are incapable of printing a gun, or not allowing algorithms that can design guns.

There are two sessions left. Next week we finish crime and move on to driverless cars.


Brey, P. (2008). ‘Human Enhancement and Personal Identity’, Ed. Berg Olsen, J., Selinger, E., Riis, S., New Waves in Philosophy of Technology. New Waves in Philosophy Series, New York: Palgrave Macmillan, 169-185.


Naughton, J 2016, The Cyber Effect by Mary Aiken – review, The Guardian, accessed 17 March 2017, <>.

3D printed firearms 2016, Wikipedia, accessed 17 March 2017, <>.

Philosophy of Technology: Human vs Machine – Week 7

In our session this week we finished our discussion of Machine Ethics and began discussing Medical issues and technology.

We started the session by talking about intelligent machines that could be capable of devising their own rules. We asked who would be to blame when something goes wrong. The programmer did not code the instructions that caused the problem; the computer took the action due to complex algorithms and artificial intelligence capabilities. But you can’t sue or punish a machine; it doesn’t care.

Kerry recommended a website to us called Moral Machine. This resource gives the user a series of scenarios based on what would happen if the brakes on a driverless car failed and the car was about to crash into people crossing a road. You have a moral choice to make for each scenario: do you crash into group 1 or group 2, where each group has different characteristics based on the people in it? It is an interesting dilemma, and at the end of the test you are given a breakdown of the results to see the types of people you favour over others. In other words, the test will tell you what types of people you value more than others.

[Screenshot: a Moral Machine scenario]

One of the scenarios included just cats as passengers in one car, which seemed a little far-fetched! Although with driverless cars now a current technology, I suppose a car with only animal passengers is now possible.

[Screenshot: a Moral Machine scenario with cats as the passengers]
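Out of curiosity, the “breakdown of results” idea can be sketched in a few lines of code. This is a hypothetical tally, not Moral Machine’s actual scoring (which is not published in this form), and the attribute names are invented for the example: each scenario records which group was spared and which was sacrificed, and the summary counts how often each attribute ends up on either side.

```python
from collections import Counter

def preference_summary(choices):
    """choices: list of (spared_group, sacrificed_group) pairs, each group
    a list of attribute strings. Returns, for each attribute, the net count
    of times it was spared minus times it was sacrificed."""
    score = Counter()
    for spared, sacrificed in choices:
        for attr in spared:
            score[attr] += 1      # this kind of person was saved
        for attr in sacrificed:
            score[attr] -= 1      # this kind of person was run over
    return dict(score)

choices = [
    (["child", "doctor"], ["adult", "cat"]),
    (["adult"], ["cat", "cat"]),
]
print(preference_summary(choices))
# → {'child': 1, 'doctor': 1, 'adult': 0, 'cat': -3}
```

A positive score means you tended to spare that type; the heavily negative score for cats mirrors the kind of verdict the real site shows you at the end.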

We discussed Immanuel Kant and how he believed certain types of actions were absolutely prohibited, even if the consequences would bring about more happiness. He said that before you can act you have to ask two questions:

  1. Can I rationally will that everyone act as I propose to act? If the answer is no then you must not act.
  2. Does my action respect the goals of human beings rather than merely using them for my own purposes? If the answer is no then you should not perform the action.
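As a playful illustration (not a serious formalisation of Kant), the two questions can be written as a simple gate: an action is permissible only if it passes both tests. The function name and inputs are invented for the example; the judgments themselves still have to come from the agent.

```python
def kant_permits(universalizable, respects_persons):
    """Toy version of Kant's two-question test.
    universalizable: could I rationally will that everyone act this way?
    respects_persons: does the action treat people as ends, not mere means?
    Failing either question prohibits the action."""
    if not universalizable:
        return False  # fails question 1: the universal-law test
    if not respects_persons:
        return False  # fails question 2: the humanity-as-an-end test
    return True

# Lying fails both tests, so on this scheme it is never permitted --
# which matches Kant's hard line on lying.
print(kant_permits(universalizable=False, respects_persons=False))  # False
print(kant_permits(universalizable=True, respects_persons=True))    # True
```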

We decided that Kant would take neither course of action, so in this case Kant was not particularly useful. Kerry said that Kant would not even get in the car in the first place, so you may as well just stay in bed! Kant acts without emotion; he says your brain is a logical, rational machine, and if you act with emotion then you act without morality. But is it possible to leave emotion out of your decision making? Sometimes lying is a good thing, sometimes we need to lie, but Kant says lying is never good.

We talked about how machines should be designed to err in favour of humans; being overpaid rather than underpaid, for example. We used the example of an ATM, which is coded to take money back if it is not claimed within a few minutes, so if you forget to take the money no one else can take it.
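The ATM example is a nice concrete case of “fail in favour of the human”, and the take-back logic is easy to sketch. The class below is a hypothetical illustration (the grace period and method names are invented, not any real ATM’s firmware): cash left unclaimed past a timeout is flagged for retraction and re-crediting.

```python
import time

class ATM:
    """Toy model of the 'retract unclaimed cash' behaviour discussed above."""
    RECLAIM_AFTER_SECONDS = 30  # hypothetical grace period

    def __init__(self):
        self.dispensed_at = None
        self.cash_taken = False

    def dispense(self):
        # Cash pushed into the tray; start the clock.
        self.dispensed_at = time.monotonic()
        self.cash_taken = False

    def take_cash(self):
        # Customer removed the notes within the grace period.
        self.cash_taken = True

    def should_reclaim(self, now=None):
        """True if the cash is still sitting in the tray past the grace
        period and should be pulled back and re-credited to the account."""
        if self.cash_taken or self.dispensed_at is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.dispensed_at > self.RECLAIM_AFTER_SECONDS
```

The point of the design is that the failure mode (a forgotten withdrawal) resolves in the customer’s favour rather than a stranger’s.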

We then talked about Maslow’s Hierarchy of Needs. We discussed how ICT has helped to secure some of our basic psychological needs and given us opportunities to address some of our higher needs. ICT helps us to communicate quickly and freely through a variety of methods, and we can easily share valuable moments and memories using a variety of different media and documents.


We also talked about the positive aspects of computer gaming. Gaming helps build many positive characteristics including:

  • Self-knowledge
  • Friendship
  • Empathy
  • Engaging in shared activity
  • Sharing intimacy

It is true that many video games include questionable ideas and actions, and some people are increasingly worried about the violent nature of video games. However, we discussed the idea of catharsis: if you play a violent video game, does this allow you to release certain violent tendencies in a virtual world rather than in the real world?

We then moved on to a new sheet, on the topic of medicine and technology. Some of the topics we discussed were gene editing, diabetes and Alzheimer’s. An interesting article we discussed can be found here.

[Screenshot: headline of the gene editing article]

The article talks about gene editing procedures that allow people to avoid passing on serious medical conditions to their children. The report says that clinical trials could start soon. The process stops a disorder by rewriting faulty DNA to make it healthy. It is amazing that this technology exists and that scientists will in future be able to prevent serious illness; this of course is a great idea. However, we discussed that this technology is so new that we don’t know what the consequences of manipulating genes will be. If we eradicate one disease, will it cause another or make other diseases more prevalent? We talked about how perfection is not perfect: it can have flaws, and these flaws can be advantageous. We continued by saying that diversity is good, and there is a reason for it. Mutations can be good; we evolved through mutations because they were to our advantage.

Another interesting week of this course and 3 weeks left to go.

Philosophy of Technology: Human vs Machine – Week 6

This week’s session focused on AI and Machine Ethics.

Kerry began the session by talking about a recent article and a Lateline interview with Elon Musk about humans merging with machines. You can access the article in question by clicking here.

[Screenshot: headline of the article on Elon Musk and humans merging with machines]

This idea is not new and has been around for over 50 years. Progress is very slow due to the challenges of integrating hardware with organic systems. The article is an interesting read, and we discussed the possibility of an AI deep learning system becoming so advanced that it might decide humans are bad for the survival of the planet. Kerry likened this to the fate of the Rapa Nui people, who used up all the resources of Easter Island; as people began to starve, war broke out among the tribes.


We went on to talk about the Turing Test and the two imitation games devised by Alan Turing. In the first game you try to tell the difference between a man and a woman; in the second, between a human and a computer. The point of the test is whether you can be fooled. It tries to answer the question: is the machine behind the curtain actually thinking, and is its thinking human enough to fool you? We know that computers only display a small range of human behaviour; can they trick you into thinking they have fully human emotions and thoughts?


Loebner Prize Gold Medal – a prize for an artificial intelligence contest implementing the Turing Test. First prize is $100,000 and a gold medal, awarded to the first computer whose responses are indistinguishable from a human’s.

Kerry made the link to René Descartes and how we cannot trust our senses: “It is necessary that at least once in your life you doubt, as far as possible, all things” – René Descartes.


[Image: René Descartes]


[Image: parallel lines optical illusion]

We moved off AI and started to talk about machine ethics. We agreed that machines are designed and built so that they are safe to operate and won’t harm humans – remember Asimov’s three laws of robotics. Even your toaster was designed in an ethical manner. We talked about how many things, especially services, are now only available online, and how this causes problems for people who do not have access to technology and the online world. We heard of examples of older people who do not have a mobile phone or the internet and so cannot access some services. It seems that today if you don’t have a mobile phone or internet then you’re stuffed! The fact that services are online-only puts some people at a disadvantage, and this raises ethical questions. You have to ask whether there is an alternative way of doing things.

Other questions arose about social interaction. We can go through much of our daily lives without the need for any social interaction at all; everything we require we can get by using technology, and we don’t even need to speak to anybody at Woolies now the checkouts are computerised. Online social interaction is hugely popular, for example on Facebook and Twitter, but these websites also make bullying easy. Other issues around anonymity and cyber crime are relevant here too.

We then spoke about the privacy issue: basically, should we have privacy or not? We decided it is a generational issue. Older people firmly believe in privacy of information and not handing over personal details, whereas younger people do not seem to have an issue with it; they think it is a worthwhile trade-off for living in a digitally advanced age. We spoke about a recent decision by President Trump to have his staff’s phones checked to make sure they were not leaking information to the media, so even the US Government does not think people should have privacy. We decided this was a better option than being tortured for information, as in the days before digital communication and mobile phones. We also decided that governments need some privacy: absolute transparency can be very dangerous, as information can be mistreated, misrepresented and misused.


We finished this session talking about driverless cars and how they will decide whom to protect in the event of an accident. If the choice of greater harm is between the passenger and a person on the street, who is the car going to protect in a crash? Kerry had the idea that when you get into a driverless car, before the journey starts you have to take a test to work out how important you are: the more important or valuable you are to society, the more the car will protect you in the event of a crash. So if you’re Albert Einstein or a heart surgeon you will be fine! We liked the idea of this ranking system but thought it would be easy to cheat on the test, so we are not sure how it would work in reality.

Just two more sessions to go.

Philosophy of Technology: Human vs Machine – Week 5

One of the main topics this week was the idea of a utopian/dystopian society created by technology.

We started the session by talking about computers and the way they think. They are not random; their task is to achieve a goal, they are goal-driven. Humans are much more random with their thoughts: we think just for pleasure, we ponder. We don’t always need an end goal when we think about something.

Humans learn through experience, whereas computers are always told what to do. An example here is the driverless car: humans program the car with certain algorithms and build in a set of values, but these are our values, not the car’s. Recently a Tesla car was involved in a fatal crash in Florida; read the article here. The driver had put the Tesla into ‘autopilot mode’ on the motorway.

[Screenshot: headline of the article on the fatal Tesla crash]

The car crashed because sunlight reflecting off a white vehicle caused glare that the Tesla did not distinguish. Humans understand and expect glare on a sunny day; it is something we learn from a young age through experience, nobody teaches us about glare, we just happen upon it. But a computer has to be taught everything through huge amounts of data fed into the system; it is not going to learn through experience.

We talked about dystopian societies and how most sci-fi films are dystopian in theme. Think of a sci-fi film and it is almost certainly dystopian! It’s true: Blade Runner, Metropolis, The Matrix, Gattaca, Minority Report, Frankenstein, V for Vendetta, Total Recall, The Terminator – the list goes on and on. We tried to think of a utopian sci-fi film and the only one we could think of was Star Trek. Funnily enough I am not a big fan of Star Trek; the dystopian films are much more interesting in my opinion.


We discussed whether we are in a technological utopia or dystopia now, and we said we are coming into dystopia. The last few years have been a utopia of sorts, with the rise of social media for good and the explosion of new devices, 3D printing, apps and much more. However, more and more we hear about the negative impact of social media: cyber bullying, cyber crime, hacking, viruses, surveillance, data loss, addiction and many more issues.

We finished by talking about AI – artificial intelligence. We read that computers don’t produce meaning from their thinking; we are the ones who do that. Computers interpret symbols: a symbol has reference but no sense. We saw problems with the word ‘manipulate’; we didn’t think that computers really manipulate data, they mainly add and subtract, order and arrange it, not much more than that.

So, over halfway through the course now, 3 sessions left.

Philosophy of Technology: Human vs Machine – Week 4

The main focus this week was the Edge question – What to Think About Machines That Think. We read and discussed a range of articles that addressed this question.

We started the session by discussing the question, ‘What is thinking?’. We gave Kerry lots of different ways we think: analytically, creatively, imaginatively, rationally and about the self. We decided that thinking is a hard term to define, and that if what humans do counts as thinking, and we see that a machine can also think this way, then machines are also capable of thinking. One difference is that machines/computers do not have emotions or experiences. The way humans think draws on personal experience, emotion and much more; in other words we are a subjective species, whereas machines are objective.

As humans we are also very aware of ourselves as physical entities – phenomenology. Computers are not aware of themselves and they have no consciousness. They also do not feel pleasure or pain as we do. A machine does have sensors, so it can experience the real world through these input devices, but that is basically just data collection. Computers are the best at anything that requires data processing, but real experience they do not have. Kerry looked worried when she realised that computers could be great wine tasters – another human job gone!


We then read part of an article by Steven Pinker titled ‘Thinking does not imply subjugating’. A quote from the article is below:

“Just as inventing the car did not involve duplicating the horse, developing an AI system that could pay for itself won’t require duplicating a specimen of Homo sapiens.”

He is making the point that when we invent and create we are not trying to copy something from nature completely. The cars we invented do not exhibit the other behaviours of a horse, much as we invented the airplane without building in the behaviour of a bird. We have a fear that if something exhibits human behaviour then it will run amok and take over the world, which just isn’t the case.

The next article we discussed was by Matt Ridley and is called ‘Among the machines, not within the machines’. A great quote from this article is below:

“The true transforming genius of human intelligence is not individual thinking at all but the collective, collaborative, and distributed intelligence – the fact that it takes thousands of different people to make a pencil, not one of whom knows how to make a pencil.”

He points out that what truly turned the human race into world dominators was the invention of exchange and specialisation – the network effect. We all really liked this article and agreed with Ridley’s ideas. He argues that if we had remained largely autonomous individuals we would still be living in caves as hunter-gatherers. Our greatest invention is the Internet, as it connects large numbers of computers together; the Internet is the true machine intelligence.

“Where machine intelligence will make the most difference is among the machines, not within the machines.”

The last article we discussed was by Thomas Metzinger and was called ‘What if they need to suffer?’. He made the point that we’re smart because we hurt, we regret and we know we are mortal beings. In other words, we care! He asked whether good AI will also need to care about itself and other things: if an AI has its own thoughts, will those thoughts matter to it?

We talked about whether, if we could make a machine capable of suffering, we should do it. Kerry and the rest of us were pretty much against it. What would be the point? It isn’t ethical anyway. The slave trade was all about human suffering, slaves were dehumanised, and there have been other examples of suffering that we would not wish on anybody. Kerry said it was too close to playing God. As humans we all suffer, some more than others, and a lot of suffering is unfair, cruel and nasty. Kerry said that when we have children we know they are going to suffer; it is inevitable. So should we have children? If we know a baby is going to be born with a terrible disease, suffer for 5 years and then die, should we have that baby? The author proposed a set of conceptual constraints for deciding which systems should be treated as objects of ethical consideration; any system that satisfies all the constraints should be treated ethically. An unconscious robot, for example, cannot suffer.

This was another very interesting week of the course that raised some important issues. Halfway through the course now, just 4 weeks to go!


Philosophy of Technology: Human vs Machine – Week 1

My first new course for 2017 is Philosophy of Technology: Human vs Machine, a 10 week course at the University of Sydney.

In this opening week of the course we learnt what technology is, and isn’t, and had some interesting debates about driverless cars, human-looking robots, Aristotle, Kant, Heidegger and Freud.

So, what is technology?

Technology is any humanly created artefact, system or technique produced to achieve some human end or purpose. Technology is the manipulation of nature, which transforms or makes nature available for human use.

All human societies make use of different tools. The first primitive tools were used to manipulate nature in some way, such as to make fire, cook food and hunt. It is interesting to note that fire is considered a form of technology when it is produced and controlled by humans. As our use of technology has increased, our environment has changed dramatically.


The word technology also covers technique. Fire itself, for example, is not an artefact, but the control of fire is a technique which alters nature.

Technology is not for itself; it is a tool used to achieve something. In Martin Heidegger’s terminology, it is ‘in order to’. Heidegger exempted art from his definition because it is ‘for itself’.

The word technology comes from the word technê. Technê is “the set of principles, or rational method, involved in the production of an object or the accomplishment of an end; the knowledge of such principles or method; art.” Its aim is making and doing, orientated towards producing something.


The Ancient Greeks viewed art negatively and craft positively. The reason is that craft is the practical application of an art, rather than art as an end in itself. Socrates and Plato also shared this view.

Another important feature of technology is that it either compensates for some lack in our own human abilities or enhances some human feature. A tool for breaking rocks compensates for our soft hands for example.

Heidegger also notes that tools are not independent entities; outside their context of use they are useless. All tools only make sense when connected to other tools and institutions. Heidegger’s example is a hammer, which needs a workshop, nails, wood, things to build and reasons to build them.

One other really interesting point from tonight is called the uncanny valley. The concept of the uncanny was developed by Sigmund Freud and means something that is familiar but slightly strange. The uncanny valley is the hypothesis that human replicas that appear almost, but not quite, human provoke feelings of revulsion and eeriness. The ‘valley’ denotes a dip in the observer’s affinity for the replica.

[Graph: the uncanny valley]

As the image above shows, the valley is shaded in grey: affinity rises as realism increases, then dips sharply as the replica approaches, but falls short of, full human likeness. The graph shows a few examples. This is why we don’t see more human-looking robots – they just look weird and creepy.
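The rise-dip-recovery shape is easy to reproduce numerically. The function below is purely illustrative: it mirrors the shape of Mori’s sketch, but the formula and numbers are invented for demonstration, not fitted to any data.

```python
import math

def affinity(likeness):
    """Toy uncanny-valley curve for likeness in [0, 1].
    Affinity rises with realism, dips sharply near likeness ~0.85
    (the valley), and recovers as the replica becomes effectively human."""
    rise = likeness                                              # general rise with realism
    valley = 1.2 * math.exp(-((likeness - 0.85) ** 2) / 0.003)   # localized dip
    return rise - valley

for x in (0.2, 0.5, 0.85, 1.0):
    print(f"likeness={x:.2f}  affinity={affinity(x):+.3f}")
# The 0.85 point comes out strongly negative (the valley),
# while 0.5 and 1.0 are positive.
```

A cartoon robot (low likeness) scores mildly positive, a near-human replica falls into the negative dip, and only an effectively indistinguishable replica climbs back out – exactly the “weird and creepy” zone the graph shows.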

This first week was really interesting and I think the next 9 weeks are going to further enhance my knowledge of technology and philosophy, can’t wait!