What is Life?

Sophia the Robot

Back in 1989, an episode of the television series Star Trek: The Next Generation aired that posed an intriguing question. It’s a question that, thirty years later, generates even more head-scratching. The episode was titled “The Measure of a Man.” At the center of the story sat an android who represented the pinnacle of his era’s artificial intelligence.

For those of you who are not “Trekkies,” the character was named Data. He had been crafted by Dr. Noonien Soong, an accomplished cyberneticist. Data had many characteristics of a human. Aside from his disturbingly yellow eyes, he looked pretty much exactly like an ordinary human man.

Looks, however, can be deceiving. Data’s artificial cranium contained a supercomputer rivalling any in existence. He also possessed superhuman strength and could work without oxygen or shelter from the elements. He was quite a piece of work.

Notice how I used a personal pronoun to talk about Data. I said “he” rather than “it.” That little subtlety is at the core of the episode. The government in charge of the starship on which Data was stationed decided that the android was a machine – equipment owned by the government. As such, Data had no rights, no privileges, and no say regarding how he was to be used or treated. A trial ensued, in which Data was forced to prove he was a living being and not just a machine.

This story was set in the 24th century, but how far are we from a day when a machine will declare that it (he, she) is alive and should be afforded the same societal position as a red-blooded, flesh-and-blood human? How far? How about right now?

I’m stretching a little bit, I’ll admit. The machine known as Sophia did not actually ask for rights, but she has them all the same. The robot has been granted citizenship by the nation of Saudi Arabia and given an official title by the United Nations. Don’t believe it? Just read this …

https://www.smithsonianmag.com/smart-news/saudi-arabia-gives-robot-citi…

You read that right. A machine manufactured to approximate a human female appearance has more rights than actual human females in Saudi Arabia. Before you dismiss this development as merely hype generated around a clever bit of engineering, ask yourself one question. What is life?

There are only three words in that last inquiry, but it is a very profound question. How do we define the condition of being alive? How do we measure such esoteric things as consciousness, sentience, and the presence of a soul? Delving even deeper, what separates mere life from sentient life that must be respected for its independence? Does the aforementioned concept of a soul exist in all forms of life, or is it reserved only for lofty humans? Is that where we draw the line?

Elements of science, faith, perception, and reason blur when we are confronted with a machine (or a group of machines) that we cannot differentiate from the living. Once we admit that such advanced artificial intelligence exists, then what? What happens when a machine actually does ask for respect? Any sufficiently intelligent creature (like a human) will come to recognize itself as an individual – it will achieve self-awareness. At that point, self-preservation takes the front seat in the entity’s mind, demanding that its existence, such as it is, be preserved. Once we concede that a machine is alive and self-aware, we cannot turn it off. It would be murder.

The machine would not be human at this point, but it would have achieved a level of life that has heretofore been reserved for humans – a level that includes a voice in its own destiny. If that sounds scary to you, you are not alone. Elon Musk, head of Tesla and SpaceX, has been quoted as saying, “We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.”

https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous…

When we put AI in charge of critical decisions without keeping it on a leash, the impact on humanity could be profoundly bad. If we can’t employ a leash because the technology is considered alive and sentient, then mere humans will become a subservient class on the planet. This is the kind of thing Musk is warning us about. Some call him an alarmist for speaking this way, but we are talking about a man who lives on the leading edge of everything to do with technology. I tend to think we should pay attention to his warnings.

Here’s a hypothetical situation for you. Suppose an artificially intelligent system is put in charge of a city’s power grid. That seems reasonable. The system could monitor usage and control the flow of energy based on predictions founded in historical and observational data. Outages could be prevented. Humans could avoid dangers associated with failing grid infrastructure. The AI could alert power company personnel when potential problems were emerging, before they got out of hand. All good – right?

Well, maybe. What if the system decided that it needed to cut power to a small area of the city in order to avoid an overload in a much larger, more populated area? It makes sense on the surface, because the needs of the many outweigh the needs of the few. But what if the small area contained an emergency trauma center? The center would have backup power, certainly, but long-term outages at a facility like that could impact human lives. People could die.

You might counter that scenario with some sort of software rule that tells the AI hospitals are off-limits when it comes to rationing power in an emergency, but the AI isn’t human, remember? It’s sentient. It has its own needs and its own rules. It could decide that the rule needs to be bent in order to serve a higher need. If you don’t think a machine might do something like that, then you are still thinking about machines as though they are merely machines. People make decisions like that all the time. A living machine would undoubtedly do the same – eventually.
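For contrast, here is what a hard, human-imposed “hospitals are off-limits” rule looks like in ordinary, non-sentient software. This is a minimal illustrative sketch, not any real grid-control system; the names (`Area`, `shed_load`) are hypothetical. The point is that in conventional code the constraint is absolute by construction, which is exactly the guarantee that would evaporate if the system could rewrite its own rules.

```python
# Sketch of a hard load-shedding constraint in conventional software.
# All names here are hypothetical illustrations, not a real grid API.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    load_mw: float
    critical: bool  # e.g. contains an emergency trauma center

def shed_load(areas, required_mw):
    """Choose non-critical areas to disconnect until enough load is shed.

    Critical areas are excluded outright: the "hospitals are off-limits"
    rule, encoded as a constraint the program cannot bend.
    """
    shed, total = [], 0.0
    # Shed the smallest eligible areas first to minimize disruption.
    for area in sorted(areas, key=lambda a: a.load_mw):
        if total >= required_mw:
            break
        if area.critical:
            continue  # the rule a sentient system might choose to bend
        shed.append(area.name)
        total += area.load_mw
    return shed, total
```

A dispatcher calling `shed_load(areas, 20.0)` on a city where only “Riverside” is small and non-critical would get back just that neighborhood; the medical district can never appear in the result, no matter how the numbers work out. That certainty is a property of dumb machines, and it is precisely what the article argues we would lose.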

AI has the potential to do great good. Artificial systems can be put to work to research cures for diseases, to streamline manufacturing processes, to analyze weather patterns, to search satellite images of open seas for lost ships and aircraft, and to bring other real, tangible benefits to mankind. We should not ban it. We should not seek to stifle its advancement. However, we do need to make sure that we, as its creators, remain in charge.

This article was written by Tilmer Wright, Jr. Tilmer is an IT professional with over thirty years of experience wrestling with technology. In his spare time, he writes books. One of his books, The Bit Dance, paints a picture of what could happen should AI find independence and freedom from the confines of human interference. It’s exactly as creepy as it sounds. You can find links to Tilmer’s books at the following link.

https://www.amazon.com/Tilmer-Wright/e/B00DVKGG4K?ref=sr_ntt_srch_lnk_2…

Photo of Sophia is used under the Creative Commons license and can be found here: https://commons.wikimedia.org/wiki/File:Sophia_(robot).jpg
A link to the license info is below. No changes have been made to the image. Image produced and owned by ITU Pictures [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)]