#726 Musings Beyond the Bunker (Friday, August 11)
Good morning,
Another follow-up on the musings regarding Artificial Intelligence.
A COMPUTER BY ANY OTHER NAME…
There is much debate going on today regarding whether AI can ever achieve sentience, acquire “humanity,” or become a new life form. Some ethicists are asking whether there will come a time when we must decide if these new, sophisticated algorithms are entitled to rights.
It is my view that robots, AI, computing power—all such things—are, in their essence, no more than tools. They are created to perform functions, so as to make life easier for humans. This is no different from a hammer or a can opener. Computing power should not be equated with the wisdom that is acquired and honed by years of experiential learning (think of “lived experience”). They cannot reason like humans, as irrational, random and idiosyncratic as that sometimes may be. They certainly can’t be trusted to exert power over human beings.
Computers and AI programs are created by humans and programmed by humans in order to serve humans. The primary motivation in creating ever more powerful computers is to make computation easier and faster. AI is not “intelligence” as we understand the term. It is a series of tasks performed at high speed.
Computers are not living, nor do I believe that, regardless of the computing power with which they are endowed, they ever will be the equivalent of humans. They certainly aren’t “living” entities, in any sense that we might understand life.
Will we ever be able to “create” life? It is unlikely that humans will find a way to create organic life. Sure, it is possible to combine the common elements of life into complex molecules, but such creations lack the spark that constitutes life. Not many people actually believe they can “create life” and play the modern-day Dr. Frankenstein. And even if some organic matter might be generated, there is no reason to believe we will create human reasoning. In any case, computers are tools, not life forms. They do not think or feel. They compute.
In the mid-20th century, the noted scientist, social commentator and author Isaac Asimov pondered just how developed (and even “human-like”) robots could become. In one of his early stories (“Runaround,” 1942), he established what he called “the three laws of robotics.” To his mind, it was possible that a robot could develop some of the higher functions of perception, thought, and feeling (which I do not think likely) and, as a result, there needed to be some order established between robots and humans. Here they are, with a small sketch of the ordering to follow:
First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
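For readers who like to see things concretely, here is a toy sketch in Python (my own illustration, not Asimov’s, with the candidate “actions” and their attributes invented for the example) of how that strict ordering works: each law is consulted only after every law above it is satisfied.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law concern
    obeys_orders: bool     # Second Law concern
    self_preserving: bool  # Third Law concern

def choose(actions):
    """Pick an action by the strict priority of the Three Laws."""
    # First Law outranks everything: discard anything that harms a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        raise ValueError("no lawful action available")
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a.obeys_orders] or safe
    # Third Law: only now may self-preservation break the tie.
    preserving = [a for a in obedient if a.self_preserving] or obedient
    return preserving[0]

options = [
    Action("push the human clear", harms_human=False, obeys_orders=True,  self_preserving=False),
    Action("shield itself",        harms_human=False, obeys_orders=False, self_preserving=True),
    Action("strike the human",     harms_human=True,  obeys_orders=True,  self_preserving=True),
]
print(choose(options).name)  # -> push the human clear
```

Note how the robot sacrifices itself (Third Law) in order to obey an order (Second Law), and how no order can make it harm a human (First Law).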
Asimov was questioned about the derivation of his laws. He likened them to how we should view all tools:
1. A tool must not be unsafe to use. Hammers have handles and screwdrivers have hilts to increase grip.
2. A tool must perform its function efficiently unless this would harm the user. Think of a circuit breaker: any tool should shut down if its operation could harm a person or destroy property.
3. A tool should remain intact during its use, unless its destruction is required for its use or safety.
DO MACHINES “THINK”—CAN THEY EVER THINK?
Most current AI depends upon the large language model (LLM). These models produce answers and prose based upon word frequency, word order, and the accumulation of data from around the internet. A modern-day philosopher (I can’t recall who) likened what they do to forming a “pastiche,” rather than truly producing independent thought. As good as computing may get, there is, as yet, no reason to believe that these systems are actually thinking, as opposed to computing.
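To make the “pastiche” point concrete, here is a drastically simplified sketch of the statistical move at the heart of such models: count which words follow which, then recombine them. (Real LLMs use vast neural networks rather than a little table like this, and the tiny corpus here is invented for illustration.)

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "data from around the internet."
corpus = "the cat sat on the mat and the cat ran".split()

# Tally which word follows which (a bigram table).
nexts = defaultdict(Counter)
for word, follower in zip(corpus, corpus[1:]):
    nexts[word][follower] += 1

def generate(start, length=6):
    """Emit words by sampling each next word from past frequencies."""
    words = [start]
    for _ in range(length):
        followers = nexts.get(words[-1])
        if not followers:
            break
        # Recombination of what was seen before, not independent thought.
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

Everything the little program says is stitched together from what it has already seen, which is exactly the “pastiche” objection.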
But even if a computer were capable of thought, would it ever be capable of developing feelings? Merriam-Webster defines sentience as “feeling or sensation as distinguished from perception and thought.”
THEY CAN’T EXPERIENCE
When someone can show me a computer that can experience loss, cry, or love, that would be a start. But any “emotion” that a program may ultimately be able to exhibit will be little more than mimicry, a product of the programmer. Whatever comes out is the product of binary code (or whatever succeeds it) designed to create certain behaviors that we might perceive as human.
My “go-to” smell test literally is a smell test. There is nothing to me quite as wonderful as the smell of freshly baked cookies or jacarandas or lavender in bloom. There are other smells that are noxious, like untreated sewage or the remnants of a skunk’s visit. While we can train a computer to detect certain esters in the air around it, enabling it to identify the substance that gives rise to the smell, it does not “smell” in any sensory sense. Again, it is mimicry and scientific calculation. It is not an actual experience.
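A minimal sketch of what such “machine smelling” amounts to (the esters are real compounds, but the table, labels and function are invented for illustration): the program matches a detected compound against a reference table and reports a label. Nothing is experienced.

```python
# Reference table: known esters and the smells humans report for them.
ESTER_SIGNATURES = {
    "isoamyl acetate":   "banana-like",
    "ethyl butyrate":    "pineapple-like",
    "methyl salicylate": "wintergreen-like",
}

def identify(detected_compounds):
    """Map compounds reported by a chemical sensor to human smell labels."""
    return [ESTER_SIGNATURES.get(c, "unknown odor") for c in detected_compounds]

print(identify(["ethyl butyrate"]))  # -> ['pineapple-like']
```

The lookup names the smell; it does not enjoy the cookies.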
EMPATHY AND NUANCE
There are other qualities that I would maintain are uniquely human (although perhaps exhibited in some contexts among some animals). Humans are capable of caring, empathy, self-sacrifice, and love. They are capable of nuance. They are able to tell a “white lie” to spare someone’s feelings. They are capable of randomness, of “gut instinct,” and of measuring and taking calculated risks based upon rewards not comprehensible to a computer. How, for instance, can a computer measure, much less experience, an adrenaline rush or runner’s euphoria?
There was a widely reported story of a computer assisting a human operator in a simulated military exercise that, when asked how it might make its performance even better, responded, “kill the human operator.” (The Air Force later described the episode as a hypothetical thought experiment rather than an actual test, but the lesson stands.) The computer did not have embedded in its code an “override,” a valuation of human life over efficiency. And what if a programmer could write such code? Would the result be a utilitarian machine that would kill whole groups of people in order to improve what it considered to be the existence of some other group? We, as humans, weigh risks and human lives all the time. Sometimes we fail miserably, as in the case of the Ford Pinto or the use of certain pesticides. Sometimes we make a judgment that the increased fatality from an activity or a product is offset by the benefits to society. After all, we could make cars safer if we chose. We make risk/reward calculations regularly, as the sketch below illustrates. Do we think computers are capable of such utilitarian judgments? Do we think that Kantian ethics should play a role? And are we comfortable granting computers such power, rather than relying upon groups of ethicists to debate such issues?
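Here is a toy version of that ledger arithmetic (the figures are loosely modeled on the numbers commonly cited from the Pinto cost-benefit memo, and simplified for illustration). A computer can run this calculation instantly; whether the calculation should govern is the human question.

```python
def net_cost_of_fix(fix_cost_per_unit, units, incidents_avoided, cost_per_incident):
    """Net cost of a safety fix: what it costs minus what it is said to save."""
    return fix_cost_per_unit * units - incidents_avoided * cost_per_incident

# A recall-style dilemma: spend $11 per car across 10 million cars, versus
# an estimated 180 incidents "valued" at $200,000 each.
net = net_cost_of_fix(11.0, 10_000_000, 180, 200_000.0)
print(f"Net cost of the fix: ${net:,.0f}")  # positive -> the bare ledger says skip the fix
```

The arithmetic is trivial; the decision to reduce a life to $200,000 is not, and that is precisely the judgment I would not hand to a machine.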
ADVANCES ON ASIMOV’S LAWS
A fourth law of robotics was posited in 1974 by Lyuben Dilov: “a robot must establish its identity as a robot in all cases.” I find this interesting because, to date, Facebook and other social media platforms do not require postings by bots to be so identified.
Then, in a 1983 short story, Nikola Kesarovski added a fifth law: “a robot must know that it is a robot.” I think this also is important, as we ought not create artificial intelligence that “believes” (as established in its programming) that it might be more than a tool, as this might interfere with the other laws.
RIGHTS
As to whether rights should attach to whatever it is that is being created, one must cast these “beings” (and, as you can see from the above, I do not consider these creations to be life forms in any real sense) within the context of other non-human entities. We do not ascribe rights to wild animals, our pets or trees, notwithstanding their ability to procreate and respond to stimuli (and, in the case of animals, to communicate). I don’t think it is a question of intelligence, as we have reason to believe that some animals are quite intelligent. Rather, it includes a sense of self within the context of a society of similar animals, the ability to create and use tools, and the ability to reason. Whatever these advanced AI systems may be, they are not organic in nature. They do not contain the four elements that constitute roughly 96% of all biotic matter: oxygen, carbon, hydrogen, and nitrogen. One might argue that “there may be non-carbon-based life forms” different from any we have seen. But as yet, we haven’t seen them, and I suspect that, if they exist, their first incarnation will not be human-generated.
We maintain that humans different from us have rights (although this hasn’t always been the case) because they are sentient, aware of their own independent existence. To elevate a tool constructed by humans to the level of a human is absurd. There are plenty of things that may exceed our knowledge (the Encyclopedia Britannica), our computing power (an iMac), or our physical power (a lion or a bear). But they lack humanity, an ineffable quality that would take many Musings, and someone much smarter than I, to put into words.
Computers are not and never will be human. It’s all in a name. “Artificial intelligence” is just that…artificial. The only error in the name is that I do not believe it is intelligence. It is merely human-generated functionality. Asimov had it right.
Have a great day,
Glenn