Is Machine-Invented Language Dangerous?


 
Are humans intelligent enough to recognize when Artificial Intelligence is dangerous?
 
This thought came to me only recently. The thoughts that preceded it built up to the idea, the way any good perception is built. I guess I always assumed that anything dangerous would be obvious. But now I’m not so sure.
 
I’ve heard the statements respected people have made. Stephen Hawking, Elon Musk, and Bill Gates have all weighed in, expressing concern. They’re pretty smart… right? Recently I read a statement attributed to Mark Zuckerberg in which he claimed to be less troubled. Most of the people working with AI seem fascinated with machine intelligence, possibly to the point of bias. I don’t find many with the courage to speak against the majority.
 
But there are good questions to be asked, and even better concerns to be discovered. I’m reminded of the Donald Rumsfeld quote:
 
“There are known knowns. These are things we know. There are known unknowns. That is to say, there are things we know we don’t know. But there are also unknown unknowns. These are things we don’t know we don’t know.”
 
I think this vision, this cognizance that we don’t know what we don’t know, will become increasingly important in dealing with Artificial Thinking. Rather than draw conclusions, I’ll run a few ideas past you about two robots that were perceived to have invented their own language.
 
First off… DID they? Did they really invent a new language? I read some of the transcripts of the exchange between the two artificial minds, known as Bob and Alice, and I don’t see a language. How do we determine that the English words they used repetitively and in unusual ways constitute a language? I don’t see it.
 
The bots in question were under the control of Facebook and instructed to use English words. They were also tasked with negotiating the best “deal,” one that would lead to a reward, and over time they learned to avoid unrewarding deals. Then, as I understand it from the articles I have read, they began to speak in English words strung together in phrases that were meaningless to English-speaking humans.
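To make that setup concrete, here is a minimal sketch of the reward structure as I understand it from those articles: two agents divide a pool of items, and each agent scores a deal by its own private values. The item names, the values, and the brute-force stand-in for learning are all my own illustrative assumptions, not Facebook’s actual training method.

```python
import random

# A toy version of the negotiation game: two agents divide a pool of
# items, and each agent's reward is the private value of what it keeps.
# Item names, counts, and values are illustrative assumptions.
ITEMS = {"book": 1, "hat": 2, "ball": 3}  # counts of each item in the pool

def deal_reward(share, values):
    """Reward = total private value of the items this agent receives."""
    return sum(count * values[item] for item, count in share.items())

def random_split():
    """Propose a random division: agent A's share (agent B gets the rest)."""
    return {item: random.randint(0, n) for item, n in ITEMS.items()}

# Each agent privately values the items differently.
values_a = {"book": 3, "hat": 1, "ball": 1}
values_b = {"book": 0, "hat": 2, "ball": 2}

# Stand-in for learning: sample many candidate deals and keep the best
# mutually rewarding one. Agents "learn to avoid unrewarding deals"
# because a deal worth nothing to either side earns a reward of zero.
best, best_score = None, -1
for _ in range(1000):
    a_share = random_split()
    b_share = {item: ITEMS[item] - n for item, n in a_share.items()}
    r_a = deal_reward(a_share, values_a)
    r_b = deal_reward(b_share, values_b)
    if min(r_a, r_b) > 0 and r_a + r_b > best_score:
        best, best_score = (a_share, b_share), r_a + r_b

print("agent A keeps:", best[0], "| agent B keeps:", best[1])
```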
 
How does this qualify as a language? Did this “language” result in a “best deal”? Was one of the robots rewarded? If there was no resulting deal or reward, then how do we know there was a language? Was information communicated?
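That last question is one we could actually measure. A hedged, back-of-the-envelope test: estimate the mutual information between what a bot says and what happens next. If knowing the message tells us nothing about the outcome, the “language” is indistinguishable from noise. The message/outcome pairs below are invented for illustration; a real test would use the logged exchanges and the deals that followed.

```python
from collections import Counter
from math import log2

# Invented message/outcome pairs, for illustration only. A real test
# would use the bots' logged messages and the resulting deal outcomes.
pairs = [
    ("i i can i i", "deal"), ("i i can i i", "deal"),
    ("to me to me", "no_deal"), ("i i can i i", "deal"),
    ("to me to me", "no_deal"), ("to me to me", "deal"),
]

def mutual_information(pairs):
    """I(M;O) in bits: how much knowing the message reduces uncertainty
    about the outcome. Zero means the messages carry no information."""
    n = len(pairs)
    joint = Counter(pairs)
    msgs = Counter(m for m, _ in pairs)
    outs = Counter(o for _, o in pairs)
    return sum(
        (c / n) * log2((c / n) / ((msgs[m] / n) * (outs[o] / n)))
        for (m, o), c in joint.items()
    )

print(f"I(message; outcome) = {mutual_information(pairs):.3f} bits")
```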
 
Actually, my thoughts in this regard evolved from prior ideas. My first idea was… what was the motivation that prompted two artificial minds to develop a language? And that idea led to wondering why an artificial mind would be motivated at all. Does programming result in a desire to perform? To do a job well? Can motivation evolve? Does a programmed desire always remain the same in a brain that is able to learn?
 
I’m assuming the humans involved intended to supply motivation, right? What motivates an electrical appliance with no feelings or sensations? Is the motivation entirely synthetic? Is motivation something that we want in a robot? If so, how strong should that motivation be? Are there degrees of motivation we humans might come to see as too much?
 
This idea of a synthetic mind being motivated led me to make a list. I see these things as potentially problematic in constructing a true general A.I.
 
Ability to learn.
Ability to be motivated.
Ability to place value.
Ability to extrapolate.
A tendency toward universal agreement between synthetic minds.
 
It seems the first item on my list is universally accepted as a given: an ability to learn is a necessity in a true Artificial Intelligence. Otherwise, what’s the point? If a mind can’t learn, how can it be intelligent?
 
The second never occurred to me in quite these terms before learning of the invented language. I mean, I never considered “programming” to be motivation. Now I’m wondering, especially if that motivation can change.
 
If a robot learns to place value, who is to say what it will prioritize? I believe we humans anthropomorphize our creations. That might mean we assume a robot will prioritize like we do, morally and ethically. How likely is that? The placing of value could be a tremendously beneficial attribute, or a terribly dangerous ability. I suggest caution, especially if value can be changed by acquiring new information.
 
If new information can change learning, motivation, and value, then it seems extrapolation is a given. But the ability to extrapolate -well- is grounded in understanding reality. Are we to expect artificial minds to have the same values as humans? Perhaps we will learn something from them if they extrapolate better than we do, getting better results. But perhaps really well-done extrapolation will result in events that we humans consider harsh, or even immoral.
 
Will A.I. result in universal agreement between synthetic minds? One of the strengths of human beings is that we DON’T always agree. We test our conclusions against each other’s, sometimes failing dramatically or succeeding spectacularly… but largely advancing at a relatively steady pace. Sometimes gaining ground too fast can be problematic, especially if we are wrong.
 
I worry that human beings are biased to embrace the next big thing without understanding the unintended consequences. The human race takes chances that would make anyone who knows how to use a risk matrix cringe. There was a theory that detonating the first atomic bomb could ignite the entire atmosphere of the earth, yet we pushed the button anyway. Similar concerns have been raised about the large particle colliders we presently operate. We humans decided the learning involved justified the risk, even though the risk could mean the end of all learning.
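For anyone who hasn’t used one, a risk matrix scores each scenario as likelihood times impact, and the whole point of the tool is that an existential impact dominates the score even when the likelihood is remote. The sketch below is a toy illustration with made-up scales and scores, not anyone’s official risk register.

```python
# Toy risk matrix: score = likelihood x impact, each on a 1-5 scale.
# Scenarios, likelihoods, and impacts are made up for illustration.
def risk_score(likelihood, impact):
    return likelihood * impact

scenarios = {
    "first A-bomb test ignites the atmosphere": (1, 5),  # remote, existential
    "particle collider mishap": (1, 5),
    "AI communicates in ways we cannot interpret": (3, 4),
}

# Rank scenarios from highest to lowest risk score.
for name, (l, i) in sorted(scenarios.items(),
                           key=lambda kv: -risk_score(*kv[1])):
    print(f"{risk_score(l, i):>2}  {name}")
```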
 
Another thing that should concern us all is the unintended and unforeseen social consequences of Artificial Intelligence. So many people assume a seamless transition to a future of robots doing anything and everything to benefit humankind. That may be a more dangerous assumption than any physical threat intelligent mechanical devices could pose.
 
So what do we think of this invented language that we never expected? The mode of understanding that two robots created in their effort to complete an assigned task? Is it benign? Or sinister? Was the language an effort to be more efficient? Did it succeed? Or was it an effort to obfuscate so humans were unable to understand what concerned the synthetic minds? To keep us from understanding something -they- don’t want us to know.
 
The known unknowns keep piling up. I’m sure you realize that the unknown unknowns are piling up as well.
 


4 thoughts on “Is Machine-Invented Language Dangerous?”

  1. Bob: i can i i everything else . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to
    Bob: you i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i i can i i i everything else . . . . . . . . . . . . . .
    Alice: balls have a ball to me to me to me to me to me to me to me
    Bob: i . . . . . . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to
    Bob: you i i i i i everything else . . . . . . . . . . . . . .
    Alice: balls have 0 to me to me to me to me to me to me to me to me to
    Bob: you i i i everything else . . . . . . . . . . . . . .
    Alice: balls have zero to me to me to me to me to me to me to me to me to

  2. Science fiction isn’t useful because it’s predictive. It’s useful because it reframes our perspective on the world. Like international travel or meditation, it creates space for us to question our assumptions. Assumptions locked 19th-century minds into believing that cities were doomed to drown in horse manure. Assumptions toppled Kodak despite the fact that its engineers built the first digital camera in 1975. Assumptions are a luxury true leaders can’t afford.
