The Briefing, Albert Mohler

Thursday, June 1, 2023

It’s Thursday, June 1st, 2023.

I’m Albert Mohler, and this is The Briefing, a daily analysis of news and events from a Christian worldview.

Part I


Well, Are Humans Facing the Threat of Extinction by Artificial Intelligence or Not?: The Argument Over the Consequences of AI Rages On

Well, are we facing an imminent threat of extinction or not? And if people actually believe that we are, would it be located in a story on page six? All kinds of interesting questions coming up. Yesterday, a raft of news media outlets reported that tech leaders were warning of a risk of extinction from artificial intelligence. And there’s a story here, a real story here.

First of all, because the AI issue, the artificial intelligence issue, is deservedly front and center right now in a lot of our conversation. And there’s a reason for that. Artificial intelligence. Now, remember, intelligence in this sense means at least mimicking human intelligence at some level. It’s not at an advanced level, it’s not an equal level, but the use of the word intelligence in this sense means at least mimicking human intelligence to some degree.

Now, as you’re thinking about artificial intelligence, just take the word intelligence and put artificial in front of it. Now, some people would say, “Well, isn’t that just simply what a computer is?” And yet, no. That’s not what a computer is. Computers process, but computers do not think, or at least at this point computers haven’t thought, or at least we thought they weren’t thinking.

But as you’re looking at this in a very serious vein, artificial intelligence has all of a sudden arrived in a way that has surprised even many of its developers and proponents. Even many writing science fiction are behind on this.

And so the release of products such as ChatGPT and others, just in the matter of the last several months, has at least served public notice that something along the lines of a technological leap is now taking place, and it’s right before our eyes. Whether it’s truly an advance or not, it’s going to take some time for us to understand. But it’s going to take not only time, but also some moral framework in which we can make the evaluation.

But we do need to note that even as these technologies are now arising and they’re very much a part of our public conversation, they’re increasingly being immediately interwoven into the operations of some corporations. Already you have public institutions such as universities and colleges trying to figure out, what does this mean? “Has this student really done the work that was turned in as this paper?” Huge questions about intellectual ownership. Huge questions about human responsibility vis-a-vis the machine. But we are also looking at the fact that this is happening faster than actual human intelligence can process the big questions.

Now at this point, we just need to recognize that the giant, indeed quantum leaps in technology that we’ve experienced in recent decades have eventually pointed to the fact that these machines, which after all acted as if they were thinking, may actually, according to some of the engineers, have the capacity of thinking, thus the word intelligence. And that’s why the development of something like ChatGPT and the other commercial brands out there really has changed the moral discussion, because we’re looking at the anticipation that something’s right around the corner.

Now, the most vivid understanding of what might be right around the corner as a threat is the fact that AI could turn against the humans, against the humanity that invented it. This is a very old human scenario. It is particularly a scenario that has played out in the imagination in the modern age. But what makes this news story particularly newsworthy is that in this case, so many of the leading technology figures when it comes to artificial intelligence have themselves gathered together and signed a statement warning about the fact that the very technology they have been developing might pose a risk of extinction to the entire human race.

Now, to unpack that, we’re going to have to think about this for a moment. Kevin Roose of The New York Times starts his account this way, “A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.”

Now, let’s just think about that opening paragraph for a moment. That really does sound troubling. And indeed I’m not making light of it. I am pointing to a basic incongruity in moral terms that should come to our attention. With that opening paragraph, we’re told that a group of industry leaders who had been involved in developing artificial intelligence technology, the way that The New York Times puts it, have warned about the artificial intelligence technology they were building, and now they who have been building it are warning us that it might, “One day, pose an existential threat to humanity.” Now, let’s just remind ourselves, existential threat means a threat to the existence of humanity.

So, let’s just look at that opening paragraph in that news story on the front page of Wednesday’s edition of The New York Times. We’re being told that a group of scientists is warning us that they have brought humanity to the point of a possible extinction by a technology that they have been developing.

Yes, seriously, that’s exactly what they’re saying. The one-sentence statement that was released by the scientists through the auspices of the Center for AI Safety was this, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Now, wait just a moment. We’re being told here that what is sought is the mitigation, the lessening, the reduction of “the risk of extinction from AI.” Extinction? Exactly what extinction are they talking about? Well, they’re talking particularly about human extinction. So here you have some of the leading scientists in the field, and by the way, that’s not an exaggeration.

Those who gathered together to make this statement really do represent so many of, if not most of, the leading theorists, engineers, and scientists in this field, the very people who’ve been developing these technologies. Now they tell us that they’re concerned that what they are creating could lead rather quickly, as a matter of fact, to nothing less than the extinction of the entire human race.

Later in the Times report we read this, “The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models, the type of AI system used by ChatGPT and other chatbots, have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.”

Of particular interest is one paragraph in the news article. I would try to describe it, but just hear this, “Eventually, some believe, AI could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.” This really does tell us something about humanity, about human nature, human thinking and human behavior.

Because here we are looking at the fact that the scientists who after all have been driving this technology, they’re the ones who’ve been inventing it, innovating it, developing it, making it more sophisticated, packaging it for public use. Now they tell us they might just have created something that will bring humanity to a complete end.

It’s also interesting that a newspaper as influential as The New York Times would summarize the statement by saying that the scientists had stopped short “of explaining how that would happen.” Is that because they don’t know, or is that because they don’t want to tell us?

Behind this is also a call, even from some of the people who were the leading innovators in this technology. There are now calls to (a) put a moratorium on any future developments until some of these issues can be worked out, or (b) create some kind of government agency that would give oversight to this technology as it develops, or (c) at least come up with some way of understanding how a response to this technology might mitigate its dangers.

But as you’re looking at this, you recognize this is a parable of humanity. For Christians operating out of a biblical worldview, we just have to understand a couple of things that come immediately to the fore. For one thing, you have the ability of human beings to create mischief on a massive scale.

Mischief on a massive scale in terms of modern technology has just increased that problem exponentially. It was one thing when warfare was a matter of throwing spears at one another. It’s another thing, when you have the development of explosive devices. It’s another thing, when you create airplanes that can drop those devices. It’s another thing, if you come up with thermonuclear weapons able to be sent on missiles, traveling now at hypersonic speed.

Humanity has so often developed its technology towards extremely lethal ends. And at some point we all have to ask the question, “Will those lethal ends be directed eventually at us or at all of us? Is that what we have unleashed?” But there’s something else here, and that is, that there’s a deep understanding of the morality of knowledge, and this is something that is so crucial to the biblical worldview.

As a matter of fact, it’s so crucial that it takes us to the tree of the knowledge of good and evil in the Garden of Eden. That was the tree, the fruit of which Adam and Eve were told not to eat, and yet they sinned. They broke the command of God and they did eat of it, and from that point onward, they and all of their descendants, and that means all of us, have had the knowledge of good and evil. And as a matter of fact, we cannot escape the knowledge of good and evil.

We can put knowledge and intelligence to good ends or we can put knowledge and intelligence to evil ends. And one of the things we need to recognize is that nothing on planet earth is truly morally neutral. If you create this kind of technology, it can be used potentially for good, but at the same time it can be used for evil. And in our own age, we have reached a moment in human history where we actually have the technology to do enormous damage to the entire planet and to ourselves with the weapons that we create. Now, it turns out that one of those weapons might be something that is identified as artificial intelligence.

But there’s another huge worldview dimension here, and that has to do with what intelligence actually is. Because even as you’re looking at the news reports, National Public Radio covered the story, and it was the lead story on its website for a while with the headline, “Leading experts warn of a risk of extinction from AI.” Extinction? What kind of extinction? Well, they mean human extinction.

So there’s simply no doubt that this new technology comes with very genuine risks, but here we have to watch the language. And from a Christian worldview, the language becomes really, really important because we are talking about the word intelligence. And at a certain level, intelligence is not an exclusively human measure or an exclusively human capacity.

On the other hand, when we’re talking about human intelligence, we’re not just talking about a greater intelligence, we’re talking about a different category of intelligence altogether. And that becomes very clear when you just observe the difference between, for example, a human being and a dog. Again, all dog lovers, I think, would agree on the fact that dogs can be incredibly smart, they can be incredibly intuitive. They also have some senses that human beings do not have. They smell many things that we do not smell. And at least on most days, I think I’m thankful for that.

But when it comes to analysis, when it comes to self-knowledge, when it comes to the ability to conceptualize, human beings are in an entirely different category. And of course, the Bible explains this not just by greater intelligence or greater cerebral circumference or greater brain mass; instead, it describes this as being made in God’s image. That’s nothing that can be reduced to the merely physical or physiological. Made in God’s image means that, as image bearers of God, we have the capacity, first of all, to know him.

Our dog might like to put himself right in the sunlight because it’s warm, but the dog does not thank God for having created a world that gave us the sun as the source of warmth. You have a completely different analytical process going on here. But being made in God’s image also means that we have this enormous capacity to imagine not only factuals but counterfactuals. We can imagine not only what is, but what might be. Not only what is, but what might have been. And arguments about facts and counterfactuals are very much at play in this kind of headline news story.



Part II


The Morality of Knowledge and Technological Determinism: Big Worldview Issues Behind AI Technologies

One other issue we need at least to note here is that there is the assumption that the world is simply a part of a larger cosmos that’s running on its own, and that human beings somehow have the capacity basically to destroy the cosmos, or at least destroy our spot of the cosmos, or at least destroy ourselves as a species. That is itself a set of assumptions, and it’s actually also a demonstration of hubris, that is, of human pride. We really think that we could destroy the earth.

Now, that doesn’t mean that Christians don’t believe that we have moral responsibility, because actually the Bible says very clearly that we do. We are put on the earth in order to till the ground, to bring about a crop. We are given dominion, but we are also given the command of stewardship. We will answer to the creator for how we have used and not abused his creation.

And by the way, that stewardship becomes particularly acute, and this is the entire logic of the pro-life movement. That stewardship becomes particularly acute when we are talking about our stewardship of human life. Every single human life. At every point of development, from fertilization until natural death. Every single human life. It’s also true that as human beings, we can scare ourselves. And by that, I don’t just mean technologically, I mean morally. We can scare ourselves with the realization that we are doing something that is far more dangerous than we can handle.

In this New York Times article, we read this section, “These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building and, in many cases are furiously racing to build faster than their competitors, poses grave risks and should be regulated more tightly.” That’s an astounding paragraph.

Here you have the acknowledgement that many of these scientists and technologists are actually rushing to develop this technology out of a sense of competition. But the one thing that unites them, is the fact that they’re scaring each other and they’re now calling upon the government to stop them all, or at least bring some kind of authority into this picture before some of them or all of them, create something that will mean the end of the human race.

One of the worldview issues that is very much in the background and is now being thrust into the foreground is technological determinism. That’s something we need to face. Technological determinism is the argument that technology will win in the end, technology will out. Technological developments are inevitable.

Now, if you’re wondering whether technological determinism is a responsible worldview, just understand that during World War II, technological determinism was a major moral issue among the allied forces, in particular with the development of the atomic bomb. The moral argument behind technological determinism was made by some of the physicists involved in that project, particularly in the Manhattan Project as it was known. The argument was this, “Someone soon is going to develop this technology; better that it be those on the side of freedom and democracy and human dignity than those who are not.”

Now, was that a legitimate moral argument? It was seen so at the time. In retrospect, all of us would have to acknowledge that we should be extremely thankful that, largely because of its own stupidity, and that includes its racism against Jewish physicists, Nazi Germany did not get the atomic weapon and the allies did. Now, the two atomic bombs that brought World War II to an end were not used against Nazi Germany. Nazi Germany was already defeated by then. But the point is that if Hitler had had these bombs, Nazi Germany likely would not have been defeated.

But nonetheless, the two bombs were used to bring about the end of the war in the Pacific. And it is also interesting, as you’re looking at this moral argument, that it is not, so far as we can tell, taking place in other nations with other major worldviews that would be world competitors to the United States and our allies. Rather, once again, these moral arguments are erupting in the West, where there’s the freedom not only to create such things, but also the freedom to speak about our concerns about what these innovations, developments, and inventions might mean.

You didn’t have that freedom in Nazi Germany. You do not have that freedom right now in Communist China. Another factor to watch in these most current debates is the fact that a distinction is being made between two technologies, artificial intelligence on the one hand, and artificial general intelligence on the other hand. So what’s the difference?

Well, artificial intelligence as a term refers to machine intelligence. You could say artificial intelligence, you could say machine or technological intelligence, but not something that is claimed to be comparable to human intelligence. Artificial general intelligence is a massive leap beyond that, and it means an intelligence far closer to human intelligence. And then the question is, how far are we from the development of that technology?

The news report on this new statement released by these scientists concludes this way in the Times, “The urgency of AI leaders’ warnings has increased as millions of people have turned to artificial intelligence chatbots for entertainment, companionship, and increased productivity, and as the underlying technology improves at a rapid clip.” One man cited in the article who signed the statement said, “I think if this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”

Okay, let’s put this in a different moral sphere. Let’s assume that you are in charge of a playground. And over here are some children, and they’ve been up to something, but it hasn’t looked particularly threatening. They’re playing on the playground; that looks like what they are there to do.

But what if one of the children, or a couple of the children, or three hundred and fifty of the children, signing a statement, come up and say, “Look, here’s what we fear. We’ve started something we can’t finish. We’re afraid that we have unleashed something on the playground that could threaten the existence of the entire playground culture.” And thus these children with their manifesto come to the adults and say, “Save us from ourselves.” And this is where adults would step into the situation and, well, save these kids from themselves.

That would require rules, that would require authority, that would require an entire culture of understanding who’s in charge and who’s not. But here’s where you see something that’s basically disingenuous in this kind of statement. The very people who here are saying to the government, “You need to do something to stop this. At least put a moratorium on this. Gain control of this before we do something horrible.”

They’re the very same people who are saying out loud, “We’re going to continue to develop these technologies even with our knowledge of the risk, because if we don’t, someone else might.” And in this case, it is not like there’s a race against Nazi Germany. In this case, it’s a race against another private venture firm.



Part III


No, AI Will Not Lead to Our Extinction, But We Do Need Protection — From Ourselves

Okay, a couple of other thoughts. We have to bring this to a close. Number one, this was a front-page news story in the print edition of The New York Times. Right on the front page above the fold, that’s prime media real estate. The New York Times is saying, “This is a really important story.” And by the way, their coverage of this was actually very thorough.

I mentioned National Public Radio, again, a pretty liberal news source, and it’s operating clearly from its own worldview. The headline, “Leading experts warn of a risk of extinction from AI.” You’ll notice that the term risk of extinction, or just the word extinction, is common here. The word extinction is the one crying out in these headlines.

It’s also very telling that The Wall Street Journal, which after all might argue from both sides of the fence on this, on the one hand from the commercial application and the opportunity for economic development, and on the other hand from the moral context and appropriate warnings, ran an article with a similar headline, “AI Poses Risk of Extinction, Technology Executives Warn.” What makes the story interesting is not so much what’s in the story. What makes the story interesting is that in the print edition, it was found on page A6.

So let me just remind you what that means. That means that the editors, the news editors of The Wall Street Journal, considered this story so important that they put it on page six, by the way not even above the fold, but under the fold. In other words, the editors of this paper were at least communicating in one sense, “Look, we’re afraid the human race may go extinct. But there were five and a half other pages of issues we thought were actually more important.”

And just to put all that into perspective, in The Wall Street Journal, three pages earlier, we find a story with a headline, “California moves to ban some food additives.” And this just might mean that California will ban Skittles. And so Skittles made page three, human extinction made page six.

And so just to remind ourselves, the Christian worldview begins with God, who is omnipotent, omniscient, all sovereign, and in control of the cosmos that he created. Creation is the theater of his glory. Human beings were created as the only beings made in his image, capable of knowing him, the only moral creatures that truly know the difference between good and evil.

The Christian worldview reminds us that God’s sovereignty is eternal. And so, his sovereignty applies just as much now as when he created the entire cosmos and made Adam in his image. And so the sovereignty of God is the first principle. God is in control, and that means that history will unfold exactly as God decrees and exactly as God has revealed in the scriptures. And that is to say, Christians understand that the human race is not going to come to an end by some kind of disaster unleashed by artificial intelligence.

But the Bible also makes clear, the Christian worldview also makes clear, that human beings are capable of doing vast damage to our little planet in the cosmos, and even more importantly to ourselves. The biblical worldview also holds human beings responsible, and that includes being responsible for the technologies that we create and our use of those technologies as well. All of these come with consequences, and as Christians, we understand that all of these questions are both important and eventually unavoidable.

It should tell us something, indeed, it tells us a very great deal that here you have so many of the people developing these technologies who are basically now crying out to the government saying, “Save us before we do something horrible.” But one final thought, we need to recognize that human beings also give ourselves to dystopia from time to time.

Dystopia is the opposite of utopia. If utopia is the best of all possible worlds, dystopia is the worst of all possible worlds. These dystopian stories have often taken the form of film. Two were released in 1964, at the height of the Cold War, and were very much a part of the conversation of my childhood and teenage years: “Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb,” and the far more serious movie entitled “Fail Safe.”

In both of those narrative tales, humanity had created a technology that would eventually mean the end of humanity, and that was, of course, nuclear weaponry. Similar warnings were given about the population explosion, which turned out to be actually the inverse of the problem we would face. We have a problem of too few babies, not too many. And right now, at least many of the claims made about climate change are following a very similar pattern.

In raising this issue today, I want to stress that human beings have the moral responsibility for the stewardship of all that is within our power, including the development of these technologies. But I also want to point to the fact that there is enormous irony in a group of these technological innovators crying out to the government, “Save us before we innovate again. Save us from ourselves.” There are no doubt real issues here, but my point is these issues have to be understood in a Christian perspective, and that raises even larger issues. That’s just the way it works.

Thanks for listening to The Briefing.

For more information, go to my website at albertmohler.com. You can follow me on Twitter by going to twitter.com/albertmohler. For information on The Southern Baptist Theological Seminary, go to sbts.edu. For information on Boyce College, just go to boycecollege.com.

I’ll meet you again tomorrow for The Briefing.
