Wednesday, May 2, 2018

The Briefing

This is a rush transcript. This copy may not be in its final form and may be updated.

It’s Wednesday, May 2nd, 2018. I’m Albert Mohler and this is The Briefing, a daily analysis of news and events from a Christian worldview.

Part I


Religious liberty is once again the loser amidst the secularization of Western societies

As Western societies rapidly secularize, one of the big issues is the redefinition of religious liberty. Now, before we turn to the United States, we need to look to Europe because Europe as a civilization precedes the United States.

The experiment in self-government known as the United States of America was based upon the framework of a European worldview, and that worldview was distinctively Christian.

Religious liberty grew out of the understanding that God had granted human beings the right of worship, and furthermore, the right of conscience not to be coerced. Most importantly, the fact that God had made us as spiritual beings, the only creatures made in his image, and thus, an inviolable right to worship comes with what it means to be a human being, not just a citizen.

When the United States formed its own understanding of religious liberty, most importantly, in what’s known as the First Amendment to the US Constitution, it did not claim to have invented religious liberty, nor to have granted religious liberty to citizens, but rather to have recognized a right that is described as religious liberty, a right that was granted by the creator.

Europe is considerably ahead of the United States when it comes to the hard edge of secularization, but what happens in Europe often comes to the United States, and that’s why we should pay particular attention to a development that happened last week in the European Courts.

Barbara Leonard reporting for Courthouse News tells us, “An advisor to Europe’s highest court offered recommendations last Thursday for adjudicating employment discrimination claims against a church-run charity.

Vera Egenberger brought her job application in late 2012 for a post that would last 18 months, preparing a report on Germany’s compliance with the United Nations International Convention on the Elimination of All Forms of Racial Discrimination.”

As Leonard reports, despite her many years of experience in this field, having written a range of relevant publications, Egenberger is not religious and saw this as the reason why the employer, an affiliate of the Protestant church in Germany, did not hire her.

Indeed, the application for the position said, “We require membership of a Protestant church, or of a church which is a member of the Arbeitsgemeinschaft Christlicher Kirchen in Deutschland …” that is the Cooperative of Christian Churches in Germany, “… and identification with the welfare mission.”

Applicants were asked to list their required church membership as a part of the application. But then Leonard reports, “Germany’s Labor Court found that Egenberger suffered discrimination, but the Federal Employment Court invited the European Court of Justice to weigh in, voicing uncertainty about the correct interpretation of EU law.”

Before going further, we should note the problem with supranational courts. In this case, German courts went on to ask the European Court of Justice for its own ruling, consistent with the constitution and statutes of the European Union, and the ruling of this court would effectively be higher than the ruling of Germany’s own courts.

The EU official at the center of this story is Advocate General Evgeni Tanchev. He noted this is the first time this particular article of the EU constitution has been tested in this way. He went on to call it a balancing act.

He said it was, “Difficult to overstate …” in his words, “… the delicacy of balancing preservation of the right of the European Union’s religious organizations to autonomy and self-determination against the need …” he said, “… for effective application of the prohibition on discrimination with respect to religion and belief on the European Union’s ethnically and religiously diverse labor market.”

Now that’s rather technical and perhaps overly complicated language describing the balancing act. In this case, the advocate general said the balance is between the rights of Europe’s religious organizations to actually be religious, and the rights of the people who are the citizens of those nations not to suffer from what’s described as religious discrimination.

Now just consider the fact that here we’re talking about two principles that cannot be balanced. One is going to give to the other. These are two irreconcilable absolutes, either an absolute that there be no religious discrimination, or the absolute that religious bodies, churches and their denominations, should be free to hire on the basis of their religious identity and theological convictions. But as you would expect in this secularizing context, when one has to give to the other, it’s religious liberty in this case that was the loser.

The ruling in the end was this, as I quote from Courthouse News. “The advisor was unequivocal that the church employer here cannot authoritatively determine whether adherence by an applicant to a specified religion constitutes a genuine, legitimate and justified occupational requirement.”

Well, again, cutting through the language, what the advocate general here has decided is that churches and religious institutions in Europe cannot decide themselves whether or not adherence to the religion should be a requirement for employment.

The advocate general also instructed national courts, not only in Germany but throughout Europe, to follow his lead in understanding the nature of the balance. Again, the news release said that, “National courts must balance these rights …” that is religious liberty, “… to autonomy and self-determination against the rights of employees or prospective employees to be free from religious discrimination.”

Now, again, you simply can’t have it both ways. Religious discrimination in this case means the right of religious churches and organizations to hire according to their own convictions and beliefs. Specifically, in this case, this church-run charity was told that it cannot require those who will be even policymaking officials within the charity, to belong to the church or one of its affiliated churches.

The bottom-line summary of this particular ruling is made clear in the Courthouse News headline, “EU Court Advisor Cracks Whip on German Religious Employers.” So even in the headline, we are told who was the winner and who was the loser. The whip has been cracked on, “German Religious Employers.”

The headline in The Economist, one of the most influential British publications, was this, “A Court Ruling Makes It Harder For Faith-based Employers to Discriminate.” The subhead, “A curb on religious employers’ right to discriminate.” But the Erasmus column in The Economist … by the way, it is really ironic that the column is named for Erasmus, famously known for standing in the middle and not taking a clear position either way … the Erasmus column in The Economist begins by stating, “It is a problem that arises in every liberal democracy that upholds liberty of belief, and hence, the freedom of religious bodies to manage their own affairs, while also aiming to defend citizens, including job-seekers, from unfair discrimination. As part of their entitlement to run their own show …” said the column, “… faith groups often claim some exemption from equality laws when they are recruiting people.”

Now, like the other columns in The Economist, the author is not named. It’s simply Erasmus. But the column continues, “To take an extreme case, it would run counter to common sense if a church were judicially obliged to appoint a militant atheist as a priest, even if that candidate was well qualified on paper. But …” asks Erasmus, “… how generous should those exemptions be?”

Now, perhaps at this point we should almost enjoy the squirming we see within the article. You’ll notice that it begins in this paragraph with, “An extreme case, counter to common sense …” we are told, “… if a church were to be obliged by a court to appoint a militant atheist as a priest,” presumably it would be less problematic if the atheist were not militant, even if we are told that candidate was well-qualified on paper.

Well, let’s just consider for a moment, wouldn’t atheism be a disqualification even on paper? But here we should note that in the background of the so-called balancing act is the fact that much of this balance is going to be in the hands of people who represent an administrative reality, those who are looking for what’s written on paper. And here we need to note that conviction can never be reduced to mere paper.

The scenario raised in this article now explicitly presents us with the scene of some kind of bureaucrat or court official somewhere coming to make a ruling based upon whether or not on paper a job description is sufficiently religious, or if an applicant would be sufficiently irreligious to be disqualified from the position, again, on paper.

The ruling by the advocate general, we are told, may not be binding on higher courts, but in most cases, in the European Court of Justice, the advocate general’s position is the eventual position taken by the court. And even in the meantime, this ruling is enough for the plaintiff, in this case in Germany, to go ahead with a suit against the religious employer for monetary damages.

But the most important impact of this ruling is the fact that now churches and religious employers in Germany and throughout Europe have been told that they themselves are no longer in the position of determining the positions within their employment for which religious belief and adherence and church membership will be required.

Now, that might appear to be a mere bureaucratic shift, but it’s not, because in this case, this means that someone else, and that means a government representative, is going to be deciding for churches and for church employers in Germany and throughout Europe which positions do and which positions do not allow for that kind of religious requirement.

I guess it’s supposed to be cold comfort of some sort that the Erasmus column in The Economist has told us that requiring a church to hire a militant atheist as a pastor or priest would be a violation of common sense. But notice just how extreme that illustration turns out to be upon reflection. Supposedly, then, it wouldn’t be so much a violation of common sense if a lesser atheist, or for that matter someone in a lesser position, were to be imposed upon a church, simply because it would not violate secular common sense.

Here we should note, and this is very important, that in 2012 a unanimous ruling was handed down by the Supreme Court of the United States in the case known as Hosanna-Tabor Evangelical Lutheran Church and School v. EEOC. That unanimous decision affirmed the right of Christian institutions, most importantly schools and Christian religious organizations, to determine which positions do involve the teaching of doctrine, and thus the responsibility and right of the church or religious organization to decide who should and who should not be hired.

Yes, this took place in Europe, but thinking back to the headline at Courthouse News, that crack of the whip against religious employers in Europe should be heard clearly and heard loudly here in the United States.



Part II


Why Artificial Intelligence is incapable of driving us toward a better system of morality

But next, it’s really interesting and important to recognize a growing discussion concerning the morality of robots and artificial intelligence. The argument seems to be coming from two or three different directions.

First of all, at Quartz, Ambarish Mitra. He is the CEO of Blippar, which is a company committed to augmented reality and computer vision. He wrote an article entitled We Can Train AI … that’s artificial intelligence … to Identify Good and Evil and Then Use It to Teach Us Morality.

Now, there is so much that commands our attention in this article, but Ambarish Mitra begins by asking a question, “When it comes to tackling the complex questions of humanity and morality, can AI make the world more moral?”

It’s really important to see how Mitra begins his article. He states, “Morality is one of the most deeply human considerations in existence. The very nature …” he says, “… of the human condition pushes us to try to distinguish right from wrong, and the existence of other humans pushes us to treat others by those values.”

Now, that’s a profoundly true statement, but the question is, where does that dimension of our existence come from, that very drive to know the difference between right and wrong? How does the nature of the human condition appear? Well, as it turns out, it’s pretty clear that Mitra holds that it’s somehow the product of evolution and human historical experience.

He says, “What’s good and what is right are questions usually reserved for philosophers and religious or cultural leaders, but …” he says, “… as artificial intelligence weaves itself into nearly every aspect of our lives, it is time to consider the implications of AI on morality and morality on AI.”

Now he points to the fact … and you remember, he’s the CEO of a major company in this area … he says that moral issues are now at the center of concern in artificial intelligence. Should AI technologies be directed towards the development of morality in those technologies of artificial intelligence? That is to say, should it be programmed into what would eventually become robots or other forms of artificial intelligence? He asks the question.

For example, how should a self-driving car handle the terrible choice between hitting two different people on the road? He says that would be an interesting question, but the question presupposes, “That we’ve agreed on a clear moral framework.” He goes on to say, “Though some universal maxims exist in most cultures, don’t murder, don’t steal, don’t lie, there is no single perfect system of morality with which everyone agrees. But …” he says, and here’s the tantalizing teaser of his article, “… but AI could help us create one.”

He goes back to 1986, when the legal theorist, the late Ronald Dworkin, in his book Law’s Empire proposed the idea of a super-knowledgeable, virtually omniscient human judge. He named this judge Judge Hercules. This imaginary and, as Mitra says, idealized jurist would be able to understand not only all the complexities of the law in all of its relevant parts, but would also be able to understand every possible impact of a decision in order to achieve the absolute best, most right, most perfect judicial decision.

Now, Judge Hercules doesn’t exist, but what Mitra is proposing is that perhaps Judge Hercules could turn out to be a robot, a form of artificial intelligence. Perhaps, he goes on to argue, our own moral confusions could be solved by the very technologies that we may create.

But Christians need to pay extremely close attention to a following paragraph where he writes, “Let us assume that because morality is a derivation of humanity, a perfect moral system exists somewhere in our consciousness. Deriving that perfect moral system should simply therefore be a matter of collecting and analyzing massive amounts of data on human opinions and conditions and producing the correct result.”

Now, if you or I were to hold to a materialist, naturalistic, technologically driven kind of worldview, then this might appear to be not only a possibility but a moral mandate. If indeed morality is just something that exists in human consciousness, yet every single human being is finite, then if you were to add together all the moral wisdom held by all human beings, and if that human intelligence could be expanded by means of an artificial intelligence, then from the creation of an entity by human creatures might come a moral reality even more moral than the human creators who made it.

Clearly, Mitra is excited by the possibility. He asks this, “What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What …” he asks, “… if we could collect data on what goes into moral decisions and their outcomes? With enough inputs …” he says, “… we could utilize artificial intelligence to analyze these massive data sets, a monumental if not Herculean task, and drive ourselves toward a better system of morality.”

Now, at the end of the day, Mitra’s excitement should give us an understanding of exactly where a secular worldview must inevitably lead. Some kind of moral rescue, we should note, is understood to be necessary. Human beings are very frail and fragile when it comes to making moral decisions. Some people insist on making the wrong decisions. And even the most wise and seasoned human court will sometimes make wrong decisions. And no human judge, or for that matter, no human moral agent can come to a full understanding of the consequences of every decision we might make or every action we might take.

So, if indeed we’re holding to this kind of secular worldview, which tells us that human beings are just cosmic accidents and that morality is merely a well-developed but almost universal human dimension, then we can understand why we might hope to be able to create some kind of artificial intelligence that could save us from ourselves, even coming to a higher morality than is ours.

But you’ll notice the circular reasoning that is apparent here once you think about it. Human beings after all, are here credited in this argument as being the very source of moral wisdom, but human beings are also here credited with the promise of creating artificial intelligence. And the artificial intelligence will only have as input, even morally speaking, the moral wisdom of human beings.

So it turns out that this isn’t so much a new moral entity as it is some kind of technology that would just accumulate human moral wisdom. And that’s where the Christian worldview comes in to remind us that human beings are not accidents, that we are not cosmic incidents that simply happen. We are the creatures of a loving God who made us in his image as moral creatures. And we also come to understand that it is sin, and the effects of sin that keeps us from perfect moral judgment. But it is also true that it is the creator rather than the creature who is omniscient, and adding up all human wisdom, not only morally but in any other dimension, wouldn’t actually make us omniscient in the end. Judge Hercules doesn’t exist except for the fact that God exists.

The saddest part of this entire circular quandary is the fact that artificial intelligence would only exist by the human agency and invention of such artificial intelligence. And at the end of the day, that artificial intelligence can be no more intelligent than we are intelligent.



Part III


Conscious machines, cruelty, and conventional morality: Confronting the ethics of HBO’s ‘Westworld’

But then finally, that leads us to a different article on the same theme. This one appeared at The Stone column of The New York Times. It’s by Paul Bloom and Sam Harris. The headline, It’s Westworld. What’s Wrong … asked the headline … With Cruelty to Robots?

Bloom and Harris wrote, “Suppose we had robots perfectly identical to men, women and children, and we were permitted by law to interact with them in any way we pleased. How would you treat them?” Well, they go on to say, “That is the premise of Westworld, the popular HBO series.” It just recently began a second season. According to Bloom and Harris, “It raises a fundamental ethical question we humans in the not-so-distant future are likely to face.”

It speaks of the entire plot of Westworld, which is a mix of human beings and robotic hosts as they are known, who look and act and sound just like human beings, so much so that humans and the hosts are confused in the program. But then Harris and Bloom go on to say that it’s not a spoiler to argue that it doesn’t go well for humans in this scenario, but it doesn’t go well in one sense, because humans turn out to be cruel, and in this case without going into detail, horrifyingly cruel to the robots.

They then say this, “The biggest concern is that we might one day create conscious machines, sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer. Nothing seems to be stopping us from doing this,” they said. “Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.”

Well, this is where those who are committed to a Christian biblical worldview have to step back for a moment and say we don’t believe that the creation of truly conscious machines is possible. Furthermore, we don’t really believe that the term “conscious machines” is going to turn out to be a true description.

At the very beginning of the article, Bloom and Harris ask, “Suppose we had robots perfectly identical to men, women and children …” well, that’s where Christians will have to respond. They might look just like men, women and children, they might even sound just like men, women and children, but in their total constitution, and in their reality, they would not actually be men, women and children. Why? Because men, women and children are not cosmic accidents. We are not free agents in the cosmos. We were created, and it is the creator who made us in his image, and who made us conscious, sentient, moral beings.

Now, the background about Bloom and Harris, Sam Harris of course is one of the famous, or infamous Four Horsemen of the new atheism. The very background worldview here is the fact that human beings are simply a product of evolution. And furthermore, both Bloom and Harris hold to a rather absolute materialism in the sense that whatever consciousness is, as we saw in that final paragraph that I cited a moment ago, it has to have emerged merely from the material, and that’s exactly the opposite of what the Bible teaches.

Bloom and Harris go on to write, “If we did create conscious beings, conventional morality tells us that it would be wrong to harm them, precisely to the degree that they are conscious and can suffer or be deprived of happiness.” They go on to say, “It would be wrong to torture these robots, or to have children only to enslave them. It would be wrong …” they said, “… to mistreat the conscious machines of the future.”

But notice just how flimsy, how thin their argument is. First of all, it begins with a big pretend. Let’s pretend that we are able to create conscious beings. But they then say, and these words are incredibly important, “Conventional morality tells us it would be wrong to harm them.” Well, what’s this conventional morality? You will note that conventional morality makes no claim to moral absolutes. There is no absolute right or absolute wrong. It’s merely conventional judgment that it would be wrong. Let’s be very thankful that human dignity doesn’t depend upon protection from conventional morality.

The biblical worldview tells us that being made in the image of God means that we are spiritual beings. Our spiritual nature did not merely emerge as some kind of consciousness out of matter, or the material world. Thus, even though artificial intelligence may become a very sophisticated technology far beyond our imagination now, we really don’t have to worry, according to Scripture, that there is even the possibility of material machines developing a soul or a spiritual reality. Human beings did not create that. Instead, we were created as that. That’s a fundamental distinction.

But before leaving this, we need to recognize that Bloom and Harris have asked a legitimate moral question about human behavior. Let’s just limit this behavioral question, this moral question to real human beings, who are also pictured in Westworld, and who would be those who would create and supervise and run artificial intelligence. Would it be moral for human beings to act immorally towards robots?

They point to the fact that the Enlightenment philosopher Immanuel Kant didn’t believe that animals were moral beings, but that human beings should be kind and considerate to animals because we are moral beings. Now, oddly enough, that points to almost exactly the right argument here. It’s not so much that it would be morally wrong to cause harm or cruelty to a machine, because a machine would not be a morally sentient, conscious being, but rather that it would be wrong for human beings to act that way because of what it would mean morally for human beings, even the desire, or the enjoyment, or the fulfillment, or the experience of being cruel, even in theory, to a machine.

In that case, according to the Christian worldview, the big moral issue would be not so much what are we doing to machines, but in such cruelty demonstrated even in imagination, the question would be, what are we actually doing to ourselves?

Thanks for listening to The Briefing.

For more information go to my website at albertmohler.com. You can follow me on Twitter by going to twitter.com/albertmohler. For information on the Southern Baptist Theological Seminary go to sbts.edu. For information on Boyce College, just go to boycecollege.com.

I’m speaking to you from Tampa, Florida, and I’ll meet you again tomorrow for The Briefing.



R. Albert Mohler, Jr.
