Of Artificial Intelligence and Human Folly


I signed the open letter calling for a “pause” on giant artificial intelligence (AI) experiments. However, I agree with decision theorist Eliezer Yudkowsky that a six-month moratorium isn’t nearly long enough, if not for all the same reasons he puts forth in TIME Magazine. Nevertheless, I signed it because 1) it’s better than not saying anything, and 2) if it succeeds, it’s a “foot in the door” that may permit us to ask for and get the much longer moratorium we really need. And more than giant artificial intelligence, we need real wisdom, which we sorely lack.

(For the purposes of this article, you don’t have to believe that we’re on the edge of an AI-driven apocalypse. It’s enough that it’s theoretically possible.)

Fundamental Priorities

The quote that best encapsulates the problem comes from a Washington Post article about tech firms like Twitch, Twitter, and Microsoft firing ethicists. One of the laid-off, Rumman Chowdhury, said that the open letter’s focus on future questions “may distract from problems that are real right now”:

“I think it’s easy when you’re working in a pure research capacity to say that the big problem is whether AI will come alive and kill us,” Chowdhury said. But as these companies mature, form corporate partnerships and make consumer products, she added, “they will face more fundamental issues — like how do you make a banking chatbot not say racial slurs.”

Of course, the point of the open letter is to avoid the humanity-killing scenario. To my unenlightened mind, that’s a very practical consideration indeed. An old-fashioned name for morality is “practical reasoning.” I think it’s just as easy when you’re working in a corporate environment to say that short-term issues affecting the enterprise’s bottom line are more fundamental than potential long-term consequences for humanity. We can survive rude, uncooperative chatbots like “Tay” and “Sydney” just as we survive rude, uncooperative people. I suppose it doesn’t matter whether or how we destroy ourselves so long as we do it politely.

Artificial Intelligence and Social Impacts

I mention “Tay” and “Sydney” because the WaPo article mentions these cybernetic critters right after Chowdhury’s philistine sniff at theoreticians. “Tay” was a chatbot Microsoft released in 2016 and quickly dismantled after it turned into a white supremacist Holocaust denier. “Sydney” was a Bing (that is, Microsoft) chatbot released in February that would take on an aggressive, contradictory persona depending on how the user pushed it. “Sydney” also occasionally made up information and presented it as fact. A related article tells us that search-engine bots “don’t consistently cite their sources and have even made up fake studies.”

As amusing as such stories may be, we have to set them in the context of artificial intelligence and social media. We’ve known for years that you can just as easily find misinformation and disinformation on the Internet as you can find facts. However, social media has accelerated the tribalization of American politics and helped spread false information, “deep fake” videos, and “rage-farming” blogs and vlogs. (I wouldn’t be surprised if the original “QAnon” source was a chatbot.) It has also contributed to rising levels of psychological fragility among youth, especially young girls.

An increasingly fragile youth and polarized public square were not what the social-media giants intended. As you discover watching The Social Dilemma, they only intended to use psychology for farming metadata for other businesses’ marketing. It’s fair to say they didn’t—and still don’t—fully understand what they were doing. It never occurred to them that bad actors could manipulate their creations for evil purposes. Or that artificial intelligence algorithms could turn people into ideological zombies by turning them into dopamine addicts. Or that their algorithms could grow so complex that their creators can no longer completely comprehend them.

Common Sense and Business Ethics

Let’s rephrase that statement: The AI community doesn’t really understand what they’re doing. Ezra Klein points out that the average AI expert puts the probability at 10% that we will be unable to control future artificial intelligence systems capable of destroying or severely disempowering humanity. He has spoken to many who put that probability higher. Yet “many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.” What’s the difference between commitment and a “sunk cost” fallacy?

Now, suppose you know that you don’t fully understand what you’re doing and that it has a significant potential for uncontrollable, catastrophic consequences. In that case, common sense dictates that you immediately stop what you’re doing. However, the intellectual elite no longer understands “common sense” as prudential reasoning or good judgment. Rather, they view it as a collection of taboos and shibboleths the low-information masses use for lack of sophisticated knowledge. The potential rewards that giant artificial intelligence could bring are just as incalculable as the risks. Besides, movies like the Terminator franchise are just stories, right?

Except that the best stories don’t merely entertain us. They also ask us to look at ourselves and our world in different lights. The more appropriate tale would be the Jurassic Park franchise. John Hammond and his bioengineers are descendants of a literary tradition running back through Victor Frankenstein to Christopher Marlowe’s Faustus. Michael Crichton may not have foreseen the effects of social media on our children. But he did illustrate capitalism’s historical approach to tech: “Monetize first, indulge in theoretical ethics later (if ever).” They’re “too preoccupied with whether they [can] to stop to think whether they should.”

The Consequences of Pride

But perhaps we can find a more appropriate story set in the real world: Christopher Clark’s The Sleepwalkers: How Europe Went to War in 1914. It is the story of how several men, without truly knowing each other’s intentions and acting within the narrow limits of their countries’ perceived self-interests, blundered their way into a civilizational catastrophe that still affects us today. All the men involved (yes, they were mostly men) were practical, largely unimaginative “men of affairs.” You know, the kind who consider themselves too busy with real-world problems to indulge in theoretical ethics.

Thomas Carlyle, the eminent Scottish essayist and sometime philosopher, was once scolded at a dinner party for endlessly chattering about books: “Ideas, Mr. Carlyle, ideas, nothing but ideas!” To which he replied, “There once was a man called Rousseau who wrote a book containing nothing but ideas. The second edition was bound in the skins of those who laughed at the first.” Carlyle was right. Jean-Jacques Rousseau wrote a book [i.e., his Discourse on Inequality] that inspired the ruthlessness of the French Revolution (and even more destructive things after that). (Benjamin Wiker, 10 Books that Screwed Up the World, 2)

This brings us back to Chowdhury and her curious belief that the racial sensitivity of chatbots is a more fundamental issue than the prospects of species-wide suicide by computer. Ideas matter. Engineering, including electronic engineering, is all about turning theories and ideas into practical uses. But so is morality—engineering our ideas about good and evil, rights and duties, and liberties and restraints into action. Ideas have consequences. In the Carlyle quote, for “Rousseau,” you can just as easily substitute “Machiavelli,” “Hobbes,” “Marx,” “Nietzsche,” or “Sanger.” All these writers’ ideas have made significant contributions to Western moral incoherence.

The literary tradition I’ve spoken of refers to the ancient concept of hubris: a pride so enormous that it invites the wrath of the gods. “Pride goes before destruction, and a haughty spirit before a fall” (Proverbs 16:18). The antidote to pride is humility, which recognizes and respects our limitations. Ultimately, what we can and should do are different matters. Practical reason tells us there are things we shouldn’t do even though we can. Perhaps a danger even greater than giant artificial intelligence is the foolishness of believing that limits exist only to be overcome.

Conclusion

We can cite many amusing yet disquieting stories to illustrate that artificial intelligence, in its present state, can act in unpredictable ways … ways that go beyond the usual bugs and crashes of ordinary software. Before we take giant AI any further, though, we must consider how to imbue it with a robust artificial conscience. Here we can go back to the literary tradition to rediscover and re-examine Isaac Asimov’s Three Laws of Robotics. But this should also be part of a larger conversation about wisdom, which is not the same as intelligence or mere cleverness.

In recent decades, various doomsayers have been predicting The End of the World As We Know It … so many that we laugh them off as crying “wolf.” But Aesop’s fable tells us that one day the wolf did come. The Trojans disbelieved Cassandra, writing her off as mad, and thus committed themselves to destruction. We can continue to roll the dice and depend on The Forward March of Progress to protect us from self-destruction. Or we can walk away from the table with what we’ve won so far. Because eventually, the dice come up snake-eyes.

