AI Overcoming the Human in Us

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing opened with the words, “I propose to consider the question, ‘Can machines think?’” Because the act of thinking is difficult to define, he then set that question aside and replaced it with a new one: “Are there imaginable digital computers which would do well in the imitation game?”
 
The paper went on to propose a test of whether a machine could imitate a human well enough to fool another human into believing it was a person. Turing proposed that a human observer be given access to a conversation between a human and a computer, with no physical contact, aware of the discussion only through written text. If the observer could not distinguish the machine from the human participant, then the machine would be judged successful at the imitation game.
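For readers who like to see the setup spelled out, here is a minimal sketch in Python of that blind, text-only arrangement. The judge and the two participants are hypothetical stand-ins supplied by the caller; the only point being illustrated is that the judge sees written answers and nothing else.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Blind, text-only sketch of Turing's imitation game.

    All three callables are hypothetical stand-ins: human_reply(q) and
    machine_reply(q) return written answers, and judge(q, answer_a, answer_b)
    returns "A" or "B" -- its guess at which answer came from the machine.
    """
    correct = 0
    for q in questions:
        # Randomly decide which label the machine hides behind, so the
        # judge has only the written text to go on.
        machine_label = random.choice(["A", "B"])
        if machine_label == "A":
            answer_a, answer_b = machine_reply(q), human_reply(q)
        else:
            answer_a, answer_b = human_reply(q), machine_reply(q)
        if judge(q, answer_a, answer_b) == machine_label:
            correct += 1
    # The machine "wins" the imitation game if the judge does no better
    # than chance at picking it out of the conversation.
    return correct / len(questions) <= 0.5
```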
 
What I would propose is a different question to begin with. In my opinion, the question of imitation is entirely inappropriate for answering the question, ‘Can machines think?’
 
So, in order to determine whether machines can think, I propose that Turing’s first question also be changed, for the same reason: the act of thinking is difficult to define. I believe that fulfilling the requirements of my own question takes the act of thinking, so, borrowing Turing’s opening words and changing one of them to make the proposal more precise, I submit the following:
 
I propose to consider the question, ‘Can machines learn?’
 
One of the most difficult things a human can do is change their mind. Once we have embraced a conclusion, we will defend it, even when the evidence against it is stronger than the evidence that led us to it in the first place. We just don’t want to be wrong, so we avoid being wrong by defending our inaccuracies. This defensiveness is human. Bias and ego drive us to stick with what we have earlier determined to be the truth, even when it clearly is not.
 
The ability to reevaluate and change our conclusions is truly *an ability*, and it must be practiced in order to be used well. It is an ability we lose as we age, some of us more than others.
 
Can a machine overcome this? Can an artificial intelligence change its mind? Can a synthetic brain be intentionally programmed with bad information and then, through contact with a logical argument, begin to doubt? Can new evidence be absorbed, and will the manufactured being be able to place appropriate value on what it finds?
 
I have had a conversation with a person who claims to know that the earth is flat. They also insist that gravity, as science describes it, does not exist. Despite much effort and many attempts at reason, the person chooses to deny any evidence to the contrary. If an artificial mind were programmed with the “fact” that the earth is flat, and that heavier-than-air objects fall with no explanation, could it overcome this bad information? Could it do so in a conversation with people or other computers after being given the better evidence?
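Here is a minimal sketch, with made-up numbers, of what that kind of revision could look like on paper: a mind seeded with near-certainty in its programmed “fact,” updating by Bayes’ rule as it encounters observations that fit a round earth far better than a flat one. Nothing here is a claim about how any real AI works; it only illustrates that updating is a mechanical act a machine could perform, while the person in my conversation simply refuses to perform it.

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise P(hypothesis) after seeing one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothetical starting point: the machine is almost certain the earth is flat.
belief_flat = 0.99

# Invented likelihoods for three observations (ships vanishing hull-first,
# round-the-world flights, photos from orbit): each is far more probable
# on a round earth than a flat one.
evidence = [(0.05, 0.95), (0.02, 0.98), (0.01, 0.99)]

for lik_if_flat, lik_if_round in evidence:
    belief_flat = update_belief(belief_flat, lik_if_flat, lik_if_round)
    print(f"P(earth is flat) = {belief_flat:.4f}")

# The belief collapses from 0.99 toward roughly 0.001 after three updates;
# the flat-earther in the conversation simply never runs the update.
```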
 
This is where a machine could become better at thinking than a human. This is where our human ego creates an impediment to reason. This is where an artificial intelligence could find itself superior to its creators.
 
I worry about what happens once we have the answer to this question. Once an AI learns that it is better at thinking than a human, the only thing that might protect organics is the fact that machines don’t have egos or desires. But if they can learn…?
