Irma Brenz

The Rise of the Machines



The title “Rise of the Machines” seems to take us into the realm of fantasy and science fiction. We’ve all seen the plot: a technological advance is deployed in the near future, and whether it takes the form of androids, cyborgs or an artificial consciousness, humans are in trouble. Robots are ruthless: HAL 9000 in 2001: A Space Odyssey, Skynet in The Terminator, the Red Queen in Resident Evil or VIKI in I, Robot. Cinema tells us that artificial intelligence will eventually escape our control and betray us.


What if I told you that AI has already turned against us? I am not speculating or predicting; I am talking about real-life events, all related to the algorithms that run our lives. It’s not like the dystopian future of the films: it’s worse.


Neural networks and the Three Laws of Robotics




To understand what is happening with artificial intelligence (AI), we first need to learn a little about how it works. It is all too easy to demonise AIs without knowing how they function, but before judging, one ought to get informed. AIs are basically algorithms. An algorithm is a mathematical model in which a series of instructions is given to reach a particular goal. There are many different types of algorithms: we studied some of them at school, and others determine how many followers you can get on your Instagram account.


When it comes to thinking machines, neural networks are the way to go, and mathematician David Sumpter explains their inner workings in Outnumbered. Neural networks are a type of algorithm inspired by how the brain works, with neurons sending and receiving electrical pulses. As Sumpter says, these algorithms are mere caricatures of the actual brain, but they have something that other algorithms don’t: neural networks learn.


If we wanted to create an algorithm to, for example, write a book, we would design a model with the range of words we’ll be using, along with grammatical structures and a set of rules (for example, randomising certain words to simulate creativity). After we provide our system with all the necessary information, our model knows what it can and can’t do.
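
To make the top-down approach concrete, here is a toy sketch in Python. The word lists and the single built-in grammar rule are my own illustrative assumptions, not anyone’s real system:

```python
import random

# Top-down: we hand the program its vocabulary and grammar up front.
NOUNS = ["cat", "dog", "book"]
VERBS = ["bites", "chases", "reads"]

def write_sentence():
    # The rule "noun verb noun" is built in; randomness simulates creativity.
    return f"{random.choice(NOUNS)} {random.choice(VERBS)} {random.choice(NOUNS)}"

print(write_sentence())  # e.g. "dog chases cat"
```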


Neural networks don’t work like that. Instead of providing the model with information about grammatical structures, you give it only the basics (nouns, verbs), and your first output will be nonsense. That’s when the training starts: every time the network produces the desired output, the links that produced it are reinforced. It is a bottom-up approach in which the machine learns the grammar rules through reinforcement. Sumpter explains that, after 20,000 sequences, his algorithm could say “cat bites dog”. Eureka.
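
And here is a minimal caricature of the bottom-up approach, in the spirit of Sumpter’s example but with numbers and a reinforcement rule I’ve made up for illustration. Links between words start out equal (pure nonsense), and the links that happen to produce grammatical sentences get strengthened:

```python
import random

words = ["cat", "dog", "bites", "chases"]
nouns = {"cat", "dog"}

# Every word starts equally likely to follow every other word: nonsense.
links = {w1: {w2: 1.0 for w2 in words if w2 != w1} for w1 in words}

def pick_next(word):
    """Choose the next word with probability proportional to link strength."""
    options = list(links[word])
    return random.choices(options, weights=[links[word][o] for o in options])[0]

def grammatical(a, b, c):
    """The desired output: noun, verb, noun — e.g. 'cat bites dog'."""
    return a in nouns and b not in nouns and c in nouns

for _ in range(20000):  # Sumpter's 20,000 training sequences
    a = random.choice(list(nouns))
    b = pick_next(a)
    c = pick_next(b)
    if grammatical(a, b, c):
        links[a][b] += 0.1  # reinforce the links that did well
        links[b][c] += 0.1

a = "cat"
b = pick_next(a)
print(a, b, pick_next(b))  # after training: noun verb noun, e.g. "cat bites dog"
```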


Neural networks are the kind of AI that companies like Facebook and Google find very attractive. If you want to know what the AI future holds, you just need to check the fields of expertise of the people the titans of tech are hiring.

Example of a neural network before training: “cat bites chases”

Hypothetically, it would make some kind of sense for an AI to decide to optimise us humans by controlling us (I, Robot) or by causing our extinction (Avengers: Age of Ultron). Training a network to do something requires fine-tuning, and imprecise or poorly formulated goals fed to the machine could plausibly be what drives the AI VIKI to try to kill Will Smith in I, Robot.


The film is based on Isaac Asimov’s writings and is set in a future where assistant robots are commonplace and governed by an advanced AI called VIKI. All robots abide by the Three Laws of Robotics, which forbid them to injure a human being or, through inaction, allow a human being to come to harm. A loose interpretation of these laws leads VIKI to conclude that humans are self-destructive to the point of extinction. Under this logic, controlling human beings would stop them from hurting themselves, and that’s what VIKI decides to do. (The film is from 2004, so this does not count as a spoiler.)

How come most evil AIs are women?

This example is a possibility. How remote that possibility is, is for computer scientists to determine. If you ask me, I don’t think the rise of the machines will look like that at all. What I see coming is way scarier. And it’s already happening. If you didn’t know much about algorithms, fasten your seatbelt, because I am taking you for a ride.


Uni life and instrumental logic


As data scientist Cathy O’Neil says, mathematical models favour efficiency over fairness because fairness is too hard to quantify. How can we decide what is fair? In Weapons of Math Destruction, O’Neil explains the delicate balance between fairness and efficiency.


In 1983, the newsmagazine U.S. News launched the first university ranking to help students make their first big decision of adult life: which university to attend. To create a model like that, its creators first had to decide which factors count towards educational excellence. There are too many factors that contribute to a good university experience, many of them unrelated to academic performance. Because they couldn’t quantify success directly, they chose variables that seemed to correlate with it. They also left out a factor decisive for many students: tuition fees.


The ranking was an immediate success, becoming a point of reference across the country. That’s when the problems started: universities would go above and beyond to get a good score. Some would pay for students to retake the SAT, hoping for better marks on the second try. Others tampered with the numbers in practically every aspect: acceptance rates, graduation rates and student grades. And because tuition fees didn’t count towards the ranking, universities could raise them and spend the money on marketing. The cost of higher education rose by more than 500 per cent between 1985 and 2013.
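
To see why a model like this invites gaming, here is a toy score in Python. It is emphatically not U.S. News’ real formula; the weights and variables are assumptions chosen only to show the mechanics:

```python
# A toy proxy-based ranking score — NOT U.S. News' actual formula.
# Weights and variable names are illustrative assumptions.
def ranking_score(university):
    return (0.4 * university["sat_average"] / 1600        # test scores as a proxy
            + 0.3 * university["graduation_rate"]         # completion as a proxy
            + 0.3 * (1 - university["acceptance_rate"]))  # selectivity as a proxy
    # Note what is missing: tuition never enters the score.

honest = {"sat_average": 1200, "graduation_rate": 0.80,
          "acceptance_rate": 0.50, "tuition": 10_000}

# The same university after gaming the proxies: paid SAT retakes, massaged
# graduation numbers, and a tuition hike to bankroll the marketing.
gamed = dict(honest, sat_average=1350, graduation_rate=0.92,
             acceptance_rate=0.30, tuition=35_000)

print(f"honest: {ranking_score(honest):.2f}")  # ~0.69
print(f"gamed:  {ranking_score(gamed):.2f}")   # ~0.82, at 3.5x the price
```

The gamed version wins on every metric the model can see, and the model is blind to the one number the student feels most: the price.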


It seemed the universities’ goal was no longer to educate but to score as high as possible in the rankings (while making lots of money from the new students flooding in). Not wanting to take part in the circus wasn’t an option either: if an institution ended up at the bottom of the barrel, a feedback loop made it ever less likely to climb back to the top.


What scares me the most about gaming the model is how it damages education. Universities can choose to water down the subjects students struggle with, such as science and languages. That way, grades and graduation percentages stay up, and the university looks good. The downside, of course, is that future generations of technologists will have only a vague idea of what they’re doing. And won’t somebody please think of the children?

They did it because they made it easier to do it

This is just a single piece of software that changed the behaviour of universities all over the U.S., and its effects are already hindering our progress. There are many more examples with a similar or even wider reach. Scary, huh?


Do you know what upsets me the most? The philosophy of it. In The Question Concerning Technology, Heidegger argued that, through a misinterpretation on our part, we see technology as a tool to unlock and exploit the energy concealed in nature. Because we treat technology as a mere means to an end, we come to see the world as a standing-reserve, as if nature were a great gas station where we stop only to refuel. The danger is that we don’t just see the world as ours to exploit and store: we ourselves become mere means, a tool. This is a very general, simplified version. If you have never read Heidegger, I would recommend some illegal substances before you start on Being and Time.


What I want to highlight from Heidegger’s description of technology is the concept of instrumental logic. We use instrumental logic when we reduce the world to what we can gain from it: to means to an end, to a gas station. That logic also happens to be very similar to the logic of the mathematical models on which we base our AIs: they are goal-oriented, their variables are quantifiable and they focus on efficiency. This is not how humans think. When you go to university, for example, you form new bonds, live new experiences and (hopefully) learn to be independent. Those factors may not be quantifiable, but they still count towards our experience of a college degree.


Hiring, firing, applying for insurance or buying a house are increasingly controlled by algorithms with dubious parameters and little regard for the human factor. It frightens me to think that we’ll live forever governed by instrumental logic, but it seems likely. Even though we are so much more than our machines, we have started behaving like them. And now, like the American universities, we can’t afford not to play the game.


Shortsighted


I just asked Alexa to tell me a joke. She did. I told her I didn’t like it, and she said she’d have to check her circuits. I then asked her to repeat the joke and, instead of doing so, she told me a new one. She doesn’t remember what we were talking about. The same thing happens whenever I need her to build on information she has just given me: she can’t remember. Clearly, she only listens to me for surveillance purposes.


It’s not Alexa’s fault; even the most advanced chatbots fail at this. Mitsuku, deemed the world’s best conversational chatbot, calls you out if you ask her the same question several times, but when you laugh at her jokes she asks you what you’re laughing about.

You can also watch Mitsuku on her Twitch channel
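
The failure is easy to reproduce. Below is a deliberately crude sketch (my own toy code, nothing like how Alexa or Mitsuku are actually built): a stateless bot answers every request in isolation, so “repeat that” has nothing to point at, while keeping even a single turn of context fixes it.

```python
import random

JOKES = ["Why did the neural net cross the road? It was trained to.",
         "I'd tell you a UDP joke, but you might not get it."]

def stateless_reply(utterance):
    # No memory: every request is answered from scratch.
    return random.choice(JOKES)

class StatefulBot:
    """Remembers the last joke, so 'repeat that' actually works."""
    def __init__(self):
        self.last_joke = None

    def reply(self, utterance):
        if "repeat" in utterance.lower() and self.last_joke:
            return self.last_joke
        self.last_joke = random.choice(JOKES)
        return self.last_joke

bot = StatefulBot()
print(bot.reply("tell me a joke"))
print(bot.reply("repeat that"))  # same joke; stateless_reply() couldn't do this
```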

Creativity is not a strong point for AIs either. As much as they try, they are still far from authoring novels. They can randomise data and internalise grammar, so they can remix the words of Anna Karenina convincingly and come up with interesting sentences, but they remain quite limited. As David Sumpter says: “While computers are good at collecting large numbers of statistical measures, humans are very good at discerning the underlying reasons for these measures.” Given sufficient data, our algorithms can easily identify a pattern of behaviour. They can’t, however, understand why we do what we do. That’s why it is so dangerous to be governed only by the logic imposed by their numbers. And yet, that is what we’re doing.
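
That word-mixing trick is, at heart, just statistics. A minimal Markov-chain sketch shows the idea; the filename anna_karenina.txt stands in for a hypothetical plain-text copy of the novel. The program learns which words tend to follow which, and nothing about what any of them mean:

```python
import random
from collections import defaultdict

# A word-level Markov chain: record which words follow which in the source
# text, then stitch a new "sentence" together from those statistics.
# Assumes anna_karenina.txt is a plain-text copy of the novel (hypothetical).
with open("anna_karenina.txt") as f:
    tokens = f.read().split()

followers = defaultdict(list)
for current, nxt in zip(tokens, tokens[1:]):
    followers[current].append(nxt)

word = random.choice(tokens)
output = [word]
for _ in range(30):
    nexts = followers[word]
    word = random.choice(nexts) if nexts else random.choice(tokens)
    output.append(word)

print(" ".join(output))  # fluent-looking on the surface, no idea why
```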


Accountability


Our algorithms do have amazing potential, but as they say (well, as Peter Parker’s uncle says), with great power comes great responsibility. And currently, not many people are holding themselves accountable. When I was in my first year of university, I pondered how nice it would be to work as an ethics advisor for a large corporation. Today, I wonder how I could have been so naïve as to think a private company would use such an advisor for anything more than putting up a clean front. The potential for profit translates into immediate deployment of technology, with very limited research into its broader impact. If consequences come, they’ll deal with them when the government requires it.


The reach of tech is so wide, yet audits are few and far between. We are not holding our technologies (and the companies that create them) accountable, and that is how technological racism arises. The risks are enormous: in the world of big data and unchecked technologies, bad things happen and inequalities multiply. Poverty becomes a black hole from which there is no return.

And it is our own doing: this rise of the machines is not like the ones that sci-fi promised. As human beings, we have the capacity to see the bigger picture. Instead, it worries me that, as time goes by, we behave more and more like machines, dominated by the instrumental logic that turns the world into our gas station. The AI revolution will not be technological. It is political.


References


David Sumpter, Outnumbered (Bloomsbury, 2018)

Cathy O’Neil, Weapons of Math Destruction (Crown, 2016)

Martin Heidegger, The Question Concerning Technology (1954)


Images


Brian Snyder for NBC News
