A group of prominent Catholic thinkers has joined a court battle over the ethical use of AI, in a case that has garnered the attention of business leaders, constitutional scholars, politicians, and the broad public.
San Francisco-based artificial intelligence developer Anthropic PBC has taken the U.S. Department of War to court over the Pentagon’s efforts to punish the company for refusing to allow the U.S. military to use Anthropic technology for mass surveillance of U.S. citizens or for lethal autonomous weapons systems – purposes the company currently finds unacceptable.
Experts in law and policy across the spectrum of opinion agree that the government – not a government contractor – gets to decide what usage restrictions to place on available tech and even on what tech to develop, if and when the government can find a willing partner.
Anthropic does not want to be the government’s partner on the terms the government is demanding, however, and now the government is attempting to punish Anthropic for its refusal to continue partnering on the government’s terms.
In the view of the tech giant, those efforts violate Anthropic’s First Amendment right to expression and the company’s Fifth Amendment right to due process, and exceed the scope of other pertinent statutory law.
Anthropic has filed two lawsuits, one in the U.S. District Court for the Northern District of California and another in the federal appeals court in Washington, D.C., alleging the Trump administration violated the company’s First Amendment rights and improperly applied a “supply chain risk” designation to Anthropic – a label usually reserved for foreign adversaries.
Now, fourteen Catholic thinkers have joined an amicus curiae – “friend of the court” – brief filed last Friday with the U.S. District Court for the Northern District of California, on the side of Anthropic.
Prelude to a crisis
Lethal autonomous weapons systems – LAWS, sometimes called “killer robots” – are weapons that use a combination of sensor suites and advanced digital algorithms to identify and engage targets without human control.
They have been controversial for generations, and the Holy See has opposed their development in various international forums for the better part of two decades.
The U.S. already has a range of semi-autonomous systems in its arsenal, but all of them require a “human in the loop,” i.e., a real flesh-and-blood person to select targets and issue commands.
Recent changes to U.S. defense policy and pressure from the break-neck pace of the international arms race, however, have made both developers and observers extremely wary of ongoing U.S. efforts to develop, introduce, and deploy fully autonomous weapons.
RELATED: New US policy on AI threatens industry disruption, puts US at loggerheads with Holy See
“Frontier AI systems are simply not reliable enough to power fully autonomous weapons,” said Anthropic co-founder and CEO Dario Amodei in a statement late last month, as the dispute with the Pentagon was coming to a head.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said.
Amodei also raised the alarm regarding mass domestic surveillance.
“Powerful AI,” he said, “makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale,” which would create a state of affairs incompatible with free society.
“Anthropic, in the red lines it has drawn for the use of its products on domestic mass surveillance and autonomous weapons systems, sought to uphold minimal standards of ethical conduct for technical progress,” the group of fourteen Catholic thinkers say in the amicus brief.
“When technology is capable of violating life, dignity, and freedom,” they say, “it is reasonable to draw clear boundaries around its use.”
“Those boundaries reflect caution, not defiance,” the Catholic thinkers say in the amicus brief, “and responsibility rather than obstruction.”
“War is a human activity,” moral theologian Charles Camosy of the Catholic University of America told Crux Now, “one which requires direct human oversight.”
“Deadly actions in war, therefore, require human beings to be the ones morally responsible – and to take moral responsibility – in order for actions in a war to be just,” Camosy said.
Loyola University of Chicago philosophy professor Joseph Vukov spoke with Crux Now about the practicalities involved in the moral calculus.
“By shifting lethal decision-making from humans to machines,” Vukov said, “LAWS make the assignment of moral responsibility murky.”
“If no human is involved in a poor decision made by LAWS,” Vukov asked, “whom do you blame?”
Camosy and Vukov were two of four principal authors of the Catholic thinkers’ statement to the court.
In their amicus brief, the signatories framed the issue as “a narrow but consequential dispute about whether a developer of advanced AI systems may maintain principled limits on certain uses of its technology – specifically, lethal autonomous weapons and mass surveillance of Americans.”
The problem with LAWS and mass surveillance
Current U.S. law does not prohibit the development or use of LAWS.
Current U.S. defense policy requires that all systems – LAWS included – be designed in a manner that allows “commanders and operators to exercise appropriate levels of human judgment over the use of force.”
As noted in an official U.S. government report from 2018, however, “appropriate” is a word doing a lot of work.
The 2018 white paper says “‘appropriate’ is a flexible term that reflects the fact that there is not a fixed, one-size-fits-all level of human judgment that should be applied to every context.”
“What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system,” the paper noted.
When the government asked to use Anthropic’s artificial intelligence models for “all lawful purposes” without exception, the company offered to permit all lawful uses except mass surveillance and LAWS.
What constitutes surveillance in the strict legal sense is itself a thorny issue, with most laws currently in place having been enacted years and even decades before the dawn of AI.
“Surveillance,” explained Dean Ball – a senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter, who also served briefly in 2025 as a senior policy adviser on AI and emerging tech for the Trump White House – “is the collection or acquisition of private information, but that doesn’t include commercially available information.”
“If you buy something,” Ball said – he was speaking on The Ezra Klein Show of The New York Times – “if you buy a data set of some kind and then you analyze it, that’s not necessarily surveillance under the law.”
“There’s a lot of data out there,” Ball went on to say, “there’s a lot of information that the world gives off – your Google search results, your smartphone location data, all these things.”
“The reason that no one really analyzes it in the government,” Ball continued, “is not so much that they can’t acquire it and do so,” but “because they don’t have the personnel.”
Put simply, deploying AI could go a long way toward solving that problem.
“The problem with AI is that AI gives them that infinitely scalable workforce,” Ball told The Times’ Klein. “Thus,” he said, “every law can be enforced to the letter with perfect surveillance over everything.”
“And that’s a scary future,” he said.
Speaking to Klein, Ball also confirmed that the impasse over mass surveillance played a crucial role in the collapse of negotiations between the government and Anthropic – a fact widely reported after the March 6 episode of The Ezra Klein Show on which Ball appeared, and strongly suggested by Amodei’s February 26 statement.
“We support the use of AI for lawful foreign intelligence and counterintelligence missions,” Amodei said. “Using these systems for mass domestic surveillance,” however, “is incompatible with democratic values,” he continued.
“AI-driven mass surveillance presents serious, novel risks to our fundamental liberties,” Amodei stated. “To the extent that such surveillance is currently legal,” he said, “this is only because the law has not yet caught up with the rapidly growing capabilities of AI.”
The Catholic thinkers’ objections to mass surveillance – and their support of Anthropic in the case against the government – are rooted in the right to privacy, which is not an absolute or unfettered right.
It is nevertheless a real right requiring proper understanding and careful protection in any society characterized by ordered liberty.
The Catholic thinkers who have written as friends of the court note that “[p]rivacy is not an absolute right in Catholic teaching nor in the more general theological and philosophical frameworks” they endorse.
“Yet mass surveillance by the Department of War clearly oversteps privacy as described in Catholic thought, and would, more generally, amount to a clear violation of human dignity,” they say.
Government retaliation
Usually, when a company cannot provide the government with a product or service the government wants, the parties either compromise or part ways.
Last month, however, the Pentagon took the extraordinary step of designating Anthropic a “supply chain risk” – a move usually reserved for foreign adversaries suspected of posing a security risk to the United States.
Anthropic is the first U.S. company to receive the supply-chain-risk designation.
Anthropic’s Claude AI, however, is already integrated into many U.S. military systems, pursuant to a $200 million contract the government has now voided.
The Pentagon has acknowledged it will take months to disentangle and extricate Claude from U.S. defense systems.
In addition, the Pentagon’s enforcement of the supply-chain-risk designation could present a stark choice to any company currently doing business with Anthropic or considering business opportunities with the tech giant: Either cut ties with Anthropic and do business with the U.S. military, or else do business with Anthropic and forgo business with the Pentagon.
Brian Boyd, another principal author of the amicus brief, who teaches and consults with several prominent institutions including the Institute for Advanced Catholic Studies at the University of Southern California, told Crux Now he feels that filing the brief was a matter of responsible citizenship.
“When an imperfect corporation is willing to forego profit and undertake risk in order to stand up for basic principles of prudence in combat and privacy for American citizens, and is threatened by the government for doing so, everyone who is in a position to speak up in their defense ought to do so,” Boyd said.
Imperfect alignment and common cause
In their brief, the Catholic thinkers acknowledge that the specific exclusions for which Anthropic asked reflect “the company’s technical judgment that current AI systems are not yet sufficiently reliable, interpretable, or controllable” to “be entrusted with decisions that directly take human life without human oversight, or to conduct population-scale surveillance in environments where errors, bias, or misuse could cause irreversible harm.”
The fourteen Catholic friends of the court, for their part, do not believe such systems can ever be trusted with those decisions.
“Overall,” said Santa Clara University technology ethicist Brian Green, who also joined and authored the amicus brief, “Anthropic takes ethics very seriously and, from what I can tell, is doing a good job with making their AI ethical.”
“However,” Green said, “Anthropic does not completely reject the possibility of their AI being used for LAWS in the future, while the Catholic Church rejects LAWS completely.”
“This is a tension which means that at some point our two organizations may no longer agree on this topic,” he said, “but for now our visions align, so we should make common cause.”