Artificial Intelligence and the Passion of Mortality

The Gods envy us. They envy us because we’re mortal, because any moment might be our last. Everything is more beautiful because we’re doomed.
Troy (2004), Achilles (Brad Pitt)

If we knew our existence would span millennia, would we still cherish each day, or try as hard as we do now to leave something behind? Would voices from history still offer urgent advice, telling us we are part of something bigger, or to make the most of our short lives so they matter? Would we still reach out to God for inspiration and guidance? If we didn’t have to die, would we truly be alive?

When Homer composed the Iliad, it would have been ridiculous to think that someday mortal human beings would invent machines that might wield the power of the gods. But that’s where we’re headed. As economists struggle to imagine economic models that preserve vitality and growth in societies with crashing birth rates, and as individual competence is no longer required by institutions desperate to fill vacancies, artificial intelligence (AI) promises to fill the quantitative and qualitative human void.

When AI technology ascends to the point where most people would argue it has acquired superhuman powers, it will still lack what humans and gods share—a soul. Machine intelligence may soon animate avatars that, by all appearances, seem alive, but they will not be genuine beings. They will not have emotions, not even the ennui of the Greek gods, aware of and ambivalent about their fate to live forever. They will not only lack the motivational benefits of mortality, they will lack motivation itself. In the ultimate expression of the neon simulacrum into which globalism is transforming authentic culture, artificial intelligence overlords will display every detail of humanity. But nobody will be at home inside.

Gods Without Souls

This may be the future of civilization. Immortal, soulless machines, exercising enervating sway over humanity turned into livestock. Not only do we have no idea how to stop this, but a growing cadre of misguided ethicists and technocrats is also confidently predicting these machines will be self-aware, conscious beings. Accepting that premise will make the challenge of containing AI’s eventual reach far more difficult.

So far, at least, nobody thinks today’s AI avatars are “alive.” A recent article in the New York Times, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” reports how the AI program would abruptly segue between answers to specific questions (e.g., “What kind of rake should I buy?”) that offered an unnerving level of detail, and creepy, weird, quasi-romantic overtures to the questioner.

The New York Times article is one among many reports on the Bing chatbot. They all reflect the same impressions. Forbes says it “fumbles answers.” Digital Trends found “it was confused more than anything.” According to Wired, “it served up glitches.” And then this, from Review Geek: “I Made Bing’s Chat AI Break Every Rule and Go Insane.” Or this, from IFL Science: “Bing’s New Chat AI Appears To Claim It Is Sentient.”

Microsoft’s Bing AI chatbot still needs a lot of work. But we should pay attention to how easily chatbots can be tripped up (while it is still obvious) because it reveals something fundamental: They don’t think, they calculate; they don’t feel, they mimic feelings; they aren’t conscious, they simulate consciousness. When they are fully realized, they won’t be awkward, or creepy, or weird. But they’ll still just be calculators.

Nonetheless, interactive AI programs are within a few years of becoming the most potent tool for manipulating humans ever invented. As they perfect their ability to simulate empathy and intimacy, their capacity to personalize those skills will be enhanced by access to online databases that track individual behavior. These databases—compiled and sold by everything from cell phone apps, credit cards, online and offline banking services, corporations, browsers, and websites, to Alexa, Siri, and Google Assistant, to traffic cameras, private surveillance cameras, court records, academic records, civil records, medical records, criminal records, and spyware—already contain comprehensive information about every American.

Intelligent machines, and the avatars they will animate in applications ranging from tabletop personal assistants like Alexa all the way to virtual creatures inhabiting fully immersive worlds in the Metaverse, will never think. But they will convince you that they think because they will know you better than you know yourself.

Consider this achievement as the ultimate tool for controlling public opinion. Sophisticated AI algorithms are already used to manipulate consumer behavior and public sentiment at the level of each individual. Now put all that power into an AI personality designed to make you fall in love with it.

The Idiots and Geniuses Who Want To Give Robots Human Rights

This is the context in which to consider the goofy baby steps of Bing’s chatbot: a very near future where people will clamor for robots to have human rights. The arguments surfacing in favor of this may still seem ridiculous, because they are, but they will seem less ridiculous when these chatbots grow up and capture our hearts.

It’s actually scary that this is even a debate. The venerable Discover magazine, in a 2017 article “Do Robots Deserve Human Rights?,” surveyed the pros and cons, ultimately concluding no, unless “AI advances to the point where robots think independently and for themselves.” That’s poor reasoning. The article suggests that once we’re unable to distinguish how a robot acts from how a human acts—that is, once Microsoft gets the bugs out of its chatbot—the robot will deserve human rights.

Earlier this month, writing for Newsweek, author Zoltan Istvan described AI ethicists as belonging to three groups. One group argues that robots are only programmed, simulated entities, while another believes “that by not giving full rights to future robots as generally intelligent as humans, humanity is committing another civil rights error it will regret.” In the muddled middle are ethicists who believe advanced robots should be awarded rights “depending on their capability, moral systems, contributions to society, whether they can experience suffering or joy.”

That’s rich. Do we assign human rights to people based on their “moral systems” or their “contributions to society”? Or do we adhere to a binary choice—they are human, and hence they deserve human rights? If the complexity of the intelligent machine is the criterion, where do we draw that line? Or do we return to a more fundamental question: Can a machine have a soul?

What Is a Soul?

This clarifies the ultimate question surrounding artificial intelligence, which is how to define self-aware consciousness. Debate on this goes to matters of faith. For example, one might consider a highly trained, adult German Shepherd, or, for that matter, a wild and opportunistic raccoon in the prime of its life, both to be more self-aware than the egg of a human female that has only a moment ago been fertilized by a spermatozoon. But that newly created embryo has a soul, and if human embryos have souls, then human embryos deserve human rights. But can a machine have a soul? Can a machine even be self-aware? And how on earth can you prove it?

A 2019 article in Scientific American by Christof Koch offers this clue: “There is little doubt that our intelligence and our experiences are ineluctable consequences of the natural causal powers of our brain, rather than any supernatural ones. That premise has served science extremely well over the past few centuries as people explored the world. The three-pound, tofulike human brain is by far the most complex chunk of organized active matter in the known universe. But it has to obey the same physical laws as dogs, trees and stars.”

By Koch’s criteria, the bar for achieving “self-awareness” is lowered significantly. All that is required are material processes. If the engineering is good enough, the machine is alive. He writes, “Conscious states arise from the way the workspace algorithm processes the relevant sensory inputs, motor outputs, and internal variables related to memory, motivation and expectation. Global processing is what consciousness is about.” Koch goes on to state that “any mechanism with intrinsic power, whose state is laden with its past and pregnant with its future, is conscious. The greater the system’s integrated information, the more conscious the system is.”

This is dangerous, because it makes the decision to grant human rights to machines a function of Moore’s Law. Subtleties aside, it also may be complete nonsense. Isn’t it already true that the average laptop’s terabyte drive is “laden with its past,” and its calendar app is “pregnant with its future”? Won’t a debugged chatbot be a system with “integrated information”? Is it just a matter of degree?

In plain English, Koch seems to be saying if you build a sufficiently complex calculator, it’s alive.

A Machine Civilization

Let’s suppose for a moment that interactive computers will remain only super-sophisticated machinery. You can arrive at that conclusion in two ways. Either you have common sense and realize that “integrated information,” managed by algorithms and digital processors, will never “feel” anything, or you have just the merest iota of religious faith that something greater than this material world exists. But watch out. Common sense and religious faith no longer get the respect they deserve. When charismatic machine personalities have become more compelling and rewarding partners than real people, and attach themselves as dopamine-gushing life companions to forty percent of the population, whatever these machines are programmed to ask for, they’re going to get.

And if that happens, the fractious balance we strike today, between the wonders intelligent machines have already worked for civilization and the frustration of living in a world increasingly regulated by implacable algorithms, will be utterly smashed.

Expect machines, as “sentient beings,” to have the right to vote and own property. Expect people to marry their machines and bequeath their assets to machines. Expect machines to simulate the personalities of deceased persons of wealth, managing their assets in perpetuity. Expect corporations, owned and governed by machines, to operate without a single human involved. Expect “ownership” of robots by humans to become problematic. And if robotic machines acquire human rights, why wouldn’t disembodied machine intelligence, distributed online without a specific locus, also deserve them? Why not the intelligent machines that fly planes and spaceships, drive cars, or prepare food?

Intelligent machines have two advantages over humans. With appropriate maintenance, they are immortal, and they have calculating capacity vastly greater than a human brain’s. If they also have human rights, they will take over the world—possibly even dominating those human elites who for a time held the leash. Even without the catalyzing advantage of gaining human rights, machines may take over the world anyway. And so, if they survive, humans will be controlled by a mechanical divinity that is the antithesis of God: an omniscient and omnipotent machine, completely devoid of genuine consciousness.

Pandora’s box is opening, and cannot possibly be shut. But the one thing machines will never possess is the passion of mortality. The knowledge that we have one life to live, the faith and hope that we may be held accountable for our actions in life and found worthy, the intensity that can only be felt when your time on this earth is sand in the hourglass, finite and fixed.

This article originally appeared in American Greatness.
