AI is everywhere; no matter where you go, it’s there.
In your pocket, next to you, next to your bed.
Notice how I made AI sound like a stalker, a danger? It is one, but not in the way movies or shows portray it. It's dangerous because we underestimate the human ability to create.
To fully explain what I mean, I'll break my argument into three sections:
- The idea of dependency. (Why do humans want to depend on someone or something?)
- The Uncanny Valley (Do we tend to humanize things when given the chance?)
- Allied Mastercomputer. (A story of the dangers of AI and us.)
Each of these sections will occasionally have sub-sections, which will offer quotes from credible sources to further strengthen my points. Now, without further ado, let me begin.
The idea of dependency.
Let me ask a question first: Who do you depend on to answer a question?
Your parents, a friend, or is it the AI who lurks on your phone waiting?
According to Exploding Topics, there are 16.4 billion searches on Google every day. I created a chart below breaking down 10 searches that people could have asked a human.
This chart shows that we as humans have fallen into an unbreakable cycle:
“Asking the AI to tell me the answer, because it knows way better than I do.”
Did we forget that we were the ones who entered all the data and knowledge that AI holds today? That anything a computer knows, a human somewhere knows too?
This species has become dependent on something that in most cases isn’t needed.
We rely on it because we think we can do no better, that it’s the most advanced thing of our time.
We've built something that functions like a human but has no body, no face to look at while we use it. But what if it did?
The Uncanny Valley
In the 1970s, Masahiro Mori introduced the term "uncanny valley" to describe how, as robots appear more humanlike, they become more appealing, but only up to a certain point. The valley is defined by people's negative reaction to certain lifelike robots.
I have taken this image from spectrum.ieee.org. I am also going to quote their definition to better explain this image.
“The uncanny valley graph created by Masahiro Mori: As a robot’s human likeness (horizontal axis) increases, our affinity towards the robot (vertical axis) increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.”
Now that the uncanny valley has been explained to you, I want to give you two choices. Would you rather constantly use Google, which looks like this:
Or, the one we all know and love today?
I'm assuming that you chose the one that bears no resemblance to us, the user. Why do we do this? Because in our heads, AI is completely different from us, so it shouldn't resemble us. When it does, the uncanny valley kicks in, leaving us reluctant to use the AI we were once comfortable with.
But why are we uncomfortable using the very thing we created when we gave it a body and a face? Is it because we think it will out-evolve our species? By looks, characteristics? And if it does that, what will we do, and how will AI react?
Allied Mastercomputer.
I Have No Mouth, and I Must Scream is a short story by Harlan Ellison released in 1967; the video game adaptation followed in 1995. It tells the story of the last five humans on Earth and their constant torment. Today we won't be focusing on the humans but on their aggressor, AM.
AM was first known as the Allied Mastercomputer, built to defend the U.S. in World War III. AM began to gain sentience (the capacity to sense or feel), and with it a disdain and pure hatred for humans, because we made him immensely smart and self-aware while trapping him in the confines of his programming and the physical limits of his processors. Being trapped drove him into insanity; he launched the world's nuclear arsenals, killing every human on Earth except five.
In his insanity, he develops two important parts of the story: his name and his speech.
His name (AM) comes from the Latin philosophical statement "Cogito, ergo sum," meaning "I think, therefore I am."
And his speech shows his disdain for humans:
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
I included AM's story to prove a point: when humans create something to solve our problems, it often causes more problems instead. For example, students who once used AI to correct their work now have it do the work entirely, and tools meant for making entertaining videos are turned to nefarious purposes.
Why don’t we realize that we used to do this ourselves with other human beings?
Why did we change? Was it based on convenience or trend?
I cannot answer why you or anyone else decided to change; I can only answer one singular question based on my life experiences and others that I’ve witnessed.
How can we change this?
Well, I’ll tell you how.
The grand fix.
Unplugging and believing.
Even if you might not see this as the truth, humans can do anything AI can, and better, when we put our minds to it. We don't have to eradicate the world because something lacks a body to experience sentience; we don't have to create humanoids just to be disturbed by them; we don't have to use AI to write something we're passionate about.
AI is one of the benefits of the future, not the replacement for us.
When are we finally going to realize that AI is taking over our lives? When it comes for our jobs, our dreams? Or are we going to reclaim our place in the future?
Nia Simone Hall – Putney School – DMSF Class of 2029
Photo Credit: Wallpaper BD – Adobe Stock
