ROME — Last week the Vatican hosted a high-level scientific discussion, gathering experts to consider the progress, benefits and limits of advances in artificial intelligence.
A new conference at the Vatican drew experts in various fields of science and technology for a two-day dialogue on the “Power and Limits of Artificial Intelligence,” hosted by the Pontifical Academy of Sciences.
Among the scheduled speakers were several prestigious scientists, including Stephen Hawking, the prominent British physicist at the University of Cambridge and a self-proclaimed atheist, as well as leading figures from the tech industry such as Demis Hassabis, CEO of Google DeepMind, and Yann LeCun of Facebook.
The event, which ran from Nov. 30-Dec. 1, was held at the Vatican’s Casina Pio IV, the headquarters of the Pontifical Academy of Sciences, which is headed by its chancellor, Argentine Bishop Marcelo Sanchez Sorondo.
Werner Arber, a Protestant and president of the academy whose own research is in evolutionary biology, said that while artificial intelligence isn’t his specific area, it’s important for the Vatican entity to have a voice in the discussion, since its task is “to follow all actual developments in the field of natural sciences” in order to stimulate further research.
As far as the discussion on artificial intelligence is concerned, Arber said it’s important to understand current developments, which include a growing dialogue as to whether research done in the natural sciences can then be applied to the field of machinery and robotics.
Part of the debate, he said, has been whether or not machines could eventually take on some of the work human beings have traditionally done. However, he cautioned that there would be some “social-scientific implications,” since this could eventually lead to less work for people.
This is “an ethical aspect, do we want that or not?” Arber said, noting that human beings have a unique thinking and problem-solving capacity, and “it’s not good” if this gets pushed too far to the side.
It’s a “very important task of our human life…so we have to be careful to preserve our duties,” he said.
Also present at the meeting was Demis Hassabis, CEO of British artificial intelligence company DeepMind, founded in 2010 and acquired by Google in 2014. He spoke on the first day of the conference on the prospect of moving “Towards Artificial General Intelligence.”
Part of Hassabis’ work involves the science of “making machines smarter,” building systems that learn directly from data and experience in order to eventually figure out tasks on their own.
In comments to CNA, he noted how he has established an ethics board at the company to ensure that things don’t get out of hand while research is moving forward.
Artificial intelligence “is a very powerful technology,” he said, explaining that while he believes technologies in and of themselves are neutral, “it depends on what you end up using that technology for.
“So I think as a society we need to think very carefully about the ethical use of technologies, and as one of the developers of this kind of artificial intelligence technology we want to be at the forefront of thinking how to use it responsibly for the good of everyone in the world,” he said.
His company’s work is already showing up at Google in small ways, such as organizing photos and recognizing what’s in them, shaping the way a person’s phone speaks to them, and optimizing the energy used by Google’s data centers.
Hassabis said he thinks it’s “really interesting” to see the wider Catholic community taking an interest in the discussion, and called the Church’s involvement a great way “to start talking about and debating” how artificial intelligence “will affect society and how we can best use it to benefit all of the society.”
Stanislas Dehaene, a professor of cognitive neuroscience at the Collège de France and a member of the Pontifical Academy of Sciences, was also present at the gathering, and spoke to participants on day two about “What is consciousness, and could machines have it?”
Dehaene told CNA that “enormous progress” has been made in terms of understanding the brain, and that, in part thanks to these advancements, great steps have also been taken in modeling neural networks, which eventually lead “to superb artificial intelligence systems.”
With a lot of research currently being done on consciousness, Dehaene said a true “science of consciousness” has developed: what happens in the brain when it becomes aware of a piece of information is now known “to such a point that it can be modeled.”
“So the question is could it be put in computers?” he said, explaining that this is currently being studied. He said he personally doesn’t know yet whether there is a limit to the possibilities for artificial intelligence, or what it would be.
However, he stressed that “it’s very important” to consider how further advances in artificial intelligence “will modify society, how far can it go and what are the consequences for all of us, for our jobs in particular.”
Part of the discussion that needs to take place, Dehaene said, is “how to put ethical controls in the machines so they respect the laws and they respect even the moral laws” that guide human decisions.
“That is an extremely important goal that has not been achieved yet,” he said, adding that while he personally doesn’t have a problem with a machine making ethical judgments similar to that of a human being, the question “is how to get there” and how to make sure “we don’t create a system that is full of machines that don’t look like humans, that don’t share our intuitions of what should be a better world.”
Another major figure from the tech industry present for the conference was Professor Yann LeCun, Director of Artificial Intelligence Research at Facebook.
What they try to do at Facebook is to “push the state of the art to make machines more intelligent,” LeCun told CNA. The reason for this, he said, is that people are increasingly interacting through machines.
Artificial intelligence “would be a crucial key technology to facilitate communication between people,” he said, since the company’s main focus “is connecting people and we think that artificial intelligence has a big role to play there.”
Giving an example, LeCun noted that Facebook users upload around 1 billion photos every day, each of which is recognized; artificial intelligence systems then analyze the content of the photos in order to show users more images they might be interested in, or to filter out those they might object to.
“It also enables the visually impaired to get a textual description of the image that they can’t see,” he said, “so that is very useful.”
In terms of how this technology might transform the way we live, LeCun said that within the next few years or even decades, “there will be transformative applications” of artificial intelligence visible and accessible to everyone.
Self-driving cars, the ability to call a car from your smartphone instead of owning one, no parking lots and safer transportation are all things LeCun said he can see on the horizon, with medical advances being another area of rapid growth.
“There are already prototype systems that have been demonstrated to be better than human radiologists at picking out cancerous tumors,” he said, explaining that this, alongside a “host of other applications,” is going to make “a big difference.”
When it comes to the ethical side of the discussion, LeCun noted that there are both short-term and long-term concerns, such as “are robots gonna take over the world?”
“Frankly these are questions that we are not worried about right now because we just don’t have the technology that’s anywhere near the kind of power that’s required. So these are philosophical discussions but not immediate problems,” he said.
However, short-term debate points include how to make the artificial intelligence systems that already exist safer and more reliable.
LeCun noted that he helped set up a discussion forum called the “Partnership on AI,” co-founded by Facebook, Google, Microsoft, Amazon and IBM, in order to facilitate discussion on the best ways to deploy artificial intelligence.
Both ethical and technical questions are brought up, he said, noting that since it’s a public forum, people from different fields, including academics, government officials, social scientists and ethicists, are able to participate and offer their contributions.