[Editor’s note: Brian Patrick Green is Assistant Director of Campus Ethics Programs at the Markkula Center for Applied Ethics and faculty in the School of Engineering at Santa Clara University. He has a strong interest in the dialogue between science, theology, technology, and ethics. He has written and spoken on genetic anthropology, the cognitive science of the virtues, astrobiology and ethics, cultural evolution and Catholic tradition, medical ethics, Catholic moral theology, Catholic natural law ethics, transhumanism, and many other topics. He blogs at TheMoralMindfield, and many of his writings are available at his Academia.edu profile. He spoke to Charles Camosy about the ethical challenges posed by advances in artificial intelligence.]
Camosy: One can’t follow the news these days without hearing about artificial intelligence, but not everyone may know precisely what it is. What is AI?
Artificial intelligence, or AI, can be thought of as the quest to construct intelligent systems that act similarly to, or imitate, human intelligence. AI thereby serves human purposes by performing tasks that would otherwise require human labor.
For example, one form of AI is machine learning, in which computer algorithms (mathematical procedures expressed in code) are trained, under human supervision, to solve specific problems, such as how to understand speech or how to drive a vehicle. Often AI algorithms are developed to perform tasks which can be very easy for humans, such as speech or driving, but which are very difficult for computers. However, some kinds of AI are designed to perform tasks which are difficult or impossible for humans, such as finding patterns in enormous sets of data.
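To make that concrete, here is a minimal sketch of supervised machine learning – the kind of “training” just described. The data, numbers, and task are invented for illustration, and the scikit-learn library is assumed; none of this comes from the interview itself:

```python
# A minimal illustration of supervised machine learning: an algorithm
# is "trained" on examples labeled by humans, then asked to generalize
# to a new case it has not seen. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [hours of driving practice, errors on a
# road test], with a human-supplied label of pass (1) or fail (0).
X_train = [[50, 2], [10, 9], [40, 3], [5, 12], [60, 1], [8, 10]]
y_train = [1, 0, 1, 0, 1, 0]  # the "human supervision"

model = LogisticRegression()
model.fit(X_train, y_train)  # learn a pattern from the labeled examples

# Ask the trained model about a new, unseen case:
# 45 hours of practice, 4 errors.
print(model.predict([[45, 4]]))  # likely [1], i.e. a predicted "pass"
```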
AI is currently a much-hyped technology, and expectations may be unrealistic, but it does have tremendous promise, and we won’t know its true potential until we explore it more fully.
What are some of the most important reasons AI is being pursued so energetically?
AI gives us the power to solve problems more efficiently and effectively. Some of the earliest computers, like the ENIAC, were simply programmable calculators, designed to perform in seconds calculations that took humans hours of hard mental work. No one would now consider a calculator to be an “AI,” but in a sense it is, since it replaces human intelligence at solving math problems.
Just as a calculator is more efficient at math than a human, various forms of AI might be better than humans at other tasks. For example, most car accidents are caused by human error – what if driving could be automated and human error thus removed? Tens of thousands of lives might be saved every year, along with huge sums of money in healthcare costs and averted property damage.
AI may also give us the ability to solve other types of problems that have until now been difficult or impossible to solve. For example, as mentioned above, very large data sets may contain patterns that no human would be capable of noticing. But computers can be programmed to notice those patterns.
Altogether, AI is being pursued because it offers benefits to humanity, and corporations are interested because, if the benefits are great enough, people will pay for them.
What kinds of problems might AI solve? What sorts of problems might it raise?
We do not yet know all the types of problems that we might be able to hand over to AI for solutions. Currently, for example, machine learning is involved in the recommendation engines that tell us what products we might want to buy, or which advertisements might be most influential upon us. Machine learning can also act much more quickly than humans, and so is excellent for responding to cyberattacks or fraudulent financial transactions.
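As an illustration of that fraud-screening use, here is a minimal sketch of anomaly detection on invented transaction data. IsolationForest is one common technique for this kind of task and is assumed here purely for illustration; the interview does not name any particular method:

```python
# A sketch of anomaly detection, one common approach to automated
# fraud screening. The transactions below are invented.
from sklearn.ensemble import IsolationForest

# Hypothetical transactions: [amount in dollars, hour of day].
transactions = [[25, 13], [40, 12], [30, 14], [35, 11],
                [28, 13], [32, 12], [5000, 3]]  # the last looks unusual

# contamination is the assumed fraction of anomalies in the data.
detector = IsolationForest(contamination=0.15, random_state=0)
detector.fit(transactions)

# predict() returns 1 for normal-looking points and -1 for anomalies.
print(detector.predict([[29, 12], [4800, 2]]))  # expected: [ 1 -1]
```

Because such a detector reacts in fractions of a second, a flagged transaction can be held for review far faster than any human auditor could respond.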
Moving into the future, AI might be able to better personalize education to individual students, just as adaptive testing evaluates students today. AI might help figure out how to increase energy efficiency, and thus save money and protect the environment. It might increase efficiency and prediction in healthcare, improving health while saving money. Perhaps AI could even figure out how to improve law and government, or improve moral education. For every problem that needs a solution, AI might help us find it.
At the same time, for every good use of AI, an evil use also exists. AI could be used for computer hacking and warfare, perhaps yielding untold misery. It could be used to trick and defraud people. It could be used to morally mis-educate people, inculcating vice instead of virtue. It could be used to explore and exploit people’s worst fears, so that totalitarian governments could oppress their people in ways beyond what humans have yet experienced.
Those are as-yet theoretical dangers, but at least two dangers are certain. First, AI requires huge computing power, and so it will require enormous energy resources that may contribute to environmental degradation. Second, AI will undoubtedly contribute to social inequality, enriching the rich while causing mass unemployment.
Could robots with AI ever be considered self-conscious? A kind of non-human person?
This is a subject of debate and may never clearly be answered. It is hard enough to establish the self-consciousness of other living creatures on Earth, so a much more alien entity like an intelligent artifact would be even more difficult to understand and evaluate. Establishing the self-consciousness of non-biological intelligent artifacts may not happen any time soon.
What almost certainly will happen in the next decade or so is that people will try to make AIs that can fool us into thinking that they are self-conscious. The “Turing Test,” which has now achieved near-mythological status, is based on the idea that someday a computer will be able to fool a human into believing it is another human – and passing that test remains a goal of AI developers.
When we are finally unable to distinguish a human person from an intelligent artifact, should that change how we think of and treat the artifact? This is a very difficult question, because in one sense it should and in another it shouldn’t. It should, because if we dismiss the person-like AI as merely simulating personhood, then perhaps we are training ourselves toward callousness, or even wrongly dismissing something that ought to be treated as a person – for if the imitation were strong enough, we could never know whether it had somehow attained self-consciousness.
On the other hand, I think there are good reasons to assume that such an “artifactual” person simply is not a self-conscious person, precisely because it is designed as an imitation. Simulations are not the real thing. It is not alive, it does not metabolize, it could probably be turned on and off and still work the same as any computer, and so on.
In the end, we have very little ability to define what life and mind are in a precise and meaningful sense, so trying to imitate those traits in artifacts, when we don’t really know what they are, will be a confusing and problematic endeavor.
Speaking specifically as a Catholic moral theologian, are there well-grounded moral worries about the development of AI?
The greatest worry about AI, I think, is not that it will become sentient and then try to kill us (as in various science-fiction movies), or that it will raise questions of personhood and human uniqueness (whether we should baptize an AI won’t be a question just yet), but rather whether this very powerful technology will be used – by humans – for good or for evil.
Right now machine learning is focused on making money (which can itself be morally questionable), but other applications are growing. For example, if a nation runs a military simulation which tells it that barbaric tactics are the most efficient way to win a war, then it will become tempting to use barbaric tactics, as the AI instructed. In fact it might seem illogical not to, since the alternatives would be less efficient. But as human beings we should be thinking not so much about efficiency as about morality. Doing the right thing is sometimes “inefficient” (whatever efficiency might mean in a given context). Respecting human dignity is sometimes inefficient. And yet we should do the right thing and respect human dignity anyway, because those moral values are higher than mere efficiency.
As our tools make us capable of doing more and more things faster and faster, we need to pause and ask ourselves whether the things we want to do are actually good.
If our desires are evil, then efficiently achieving them will cause immense harm, perhaps up to and including the extinction of humanity (for example, to recall the movie “WarGames,” if we decide to play the “game” of nuclear, biological, nanotechnological, or some other kind of warfare). Short of extinction, malicious use of AI could cause immense harm, e.g. overloading the power grid to cause months-long, nation-sized blackouts, or causing all self-driving cars to crash simultaneously. Mere accidental AI errors can also cause vast harm: if a machine learning algorithm is fed racially biased data, then it will give racially biased results (as has already happened).
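A toy sketch of how that last failure happens, with entirely invented data: a model trained on historically biased human decisions learns to reproduce them, even for applicants who differ only in group membership (the scenario, numbers, and use of scikit-learn are all assumptions for illustration):

```python
# Invented data showing how biased training data yields biased
# predictions: past decisions disadvantaged group B, and a model
# trained on those decisions learns to do the same.
from sklearn.linear_model import LogisticRegression

# Features: [qualification score, group (0 = A, 1 = B)].
# Labels: past human decisions, which approved group A but not
# group B at identical qualification levels.
X = [[7, 0], [6, 0], [5, 0], [7, 1], [6, 1], [5, 1]]
y = [1, 1, 1, 0, 0, 0]  # the biased history

model = LogisticRegression()
model.fit(X, y)

# Two applicants with the same score, differing only by group:
print(model.predict([[6, 0], [6, 1]]))  # likely [1 0]: bias learned
```

Nothing in the algorithm is malicious; it simply found the most reliable pattern in the data it was given, and the pattern was the bias itself.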
The tradition of the Church is that technology should always be judged by morality. Pure efficiency is never the only priority; the priorities should always be loving God and loving neighbor. Insofar as AI might facilitate that (reminding us to pray, or helping reduce poverty), it is a good thing and should be pursued with zeal. Insofar as AI facilitates the opposite (distracting us from God, or exploiting others), it should be considered warily and carefully regulated or even banned. Nuclear weapons, for example, should probably never be under AI control; such a use of AI should be banned.
Ultimately, AI gives us just what all technology does – better tools for achieving what we want. The deeper question then becomes “what do we want?” and even more so “what should we want?” If we want evil, then evil we shall have, with great efficiency and abundance. If instead we want goodness, then through diligent pursuit we might be able to achieve it. As in Deuteronomy 30, God has laid before us life and death, blessings and curses. We should choose life, if we want to live.