
Since the introduction of ChatGPT late last year, artificial intelligence has become pervasive in modern American life. Artificial intelligence today is used in everything from advertising, fraud prevention, and space exploration to data analysis, health care, and even entertainment.

AI-generated robot. Credit: Pillar Media

But as AI weaves its way ever more intimately into our lives, it brings with it questions of philosophy, anthropology and ethics.

What would happen if an AI became conscious? What would it take for it to become self-aware? And could an AI ever be more than an AI — could it be a person?

It’s clear that the AI moment is not ending any time soon. So The Pillar asked a handful of experts in theology and philosophy how the Church and the world can think (better) about this topic.


Is AI a person? Could it become a person?

The Catholic Church has long thought of non-humans like angels and demons as persons, so there is nothing limiting the concept to the species Homo sapiens.

But could an AI ever be a person?

Answering that question requires a clear definition of personhood. While precise definitions may vary, one of the key philosophical criteria for personhood is often consciousness. 

Brian Green, a Catholic moral theologian who serves as director of technology ethics at Santa Clara University, told The Pillar that AI is not conscious now, and cannot become conscious in the future.

“AI is not even alive,” he said. “And there is no reason to believe that consciousness can exist in non-living beings.”

Green warned that AI developers are going to try their best to make AI seem to act like a conscious person would.

“But we should not be fooled,” he warned.

Famous robot Johnny 5. Public domain.

Kristin Collier, clinical associate professor and director of the Program on Health, Spirituality, and Religion at the University of Michigan School of Medicine, described the idea that AI could be a person as “nonsense spun by the AI venture capitalists.” 

She told The Pillar that the Christian faith teaches that “self” and “person” are attributes that do not properly belong to the category of “things.” 

Living creatures may share attributes like “self” or “person,” hence the dignity attributed to animals — but “no ‘thing’ will ever have selfhood or personhood because of the ontological divide that exists between living creatures and things,” she said. 

“A daisy is more human than the most sophisticated AI machine will ever be.”

Mariele Courtois, assistant professor of theology at Benedictine College, told The Pillar that “a person is capable of receiving a gift of grace and, in turn, offering self-gift.”

Furthermore, on account of the possession of a spiritual soul, “the action of a person is revelatory of an inner life that weighs, assesses, and commits to values.” 

Because people program AI to meet our demands and adhere to given restrictions, it is not at all like a person, she said.

The confusion surrounding AI and personhood arises not only from an overestimation of AI, but from a reduction of the human person, said Joe Vukov, associate professor of philosophy at Loyola University Chicago and author of a forthcoming book, “Staying Human in an Era of Artificial Intelligence.”

“AI does successfully replicate certain aspects of human intelligence,” Vukov noted. So if one holds a view of human nature that reduces sentient human life to mere intelligence, then it is understandable that AI can be seen as a person. 

But if people have the correct understanding of themselves as persons, Vukov insisted, “AI’s ability to replicate intelligence needn’t lead us to the conclusion of AI sentience or personhood. What makes us human in the first place is not our individual capacity for intelligence.” 


A theological lens

If AI cannot be a person, is that the end of the ethical discussion?

AI-generated. Credit: Pillar Media.

Not exactly, says Luis Vera, associate professor and theology department chair at Mount St. Mary’s University. 

“We need an ethic for relating properly to robots, full stop,” Vera insisted.

Even if AI-driven robots are not conscious persons, Vera believes that treating them virtuously is “part of an ethic of our proper relationship to God's creation.”

In addition, while treating AI-powered robots “like dirt” might help remind people that machines aren’t persons, Vera worries that “this might corrupt our ability to treat real persons with justice and charity.”

For Courtois, people can only understand humanity’s proper relationship with AI if they have a proper theological foundation. 

“Our entire human anthropology is based upon theology,” she said.

Courtois argued that “technology embeds us in structures and habits that shape our customs, perspectives, and personal character.”

“Amidst such a rapidly advancing technological trajectory and a race to deliberate the ethics along the way, we cannot forget the need to slow down and delight in our littleness before the greatness of God,” she said.

Paul Scherz, associate professor of religious studies at the University of Virginia, suggested that Catholic social teaching offers a set of tools both to affirm what is good in social changes, including those coming from AI, and to criticize what is bad.

Green agreed that Catholic social teaching is a helpful lens for viewing the present moment.

In many ways, Green said, Catholic social teaching is really “Catholic technology teaching,” because much of it was developed in response to new technologies changing society during the industrial revolution. 

Technological progress is not a bad thing in itself, but humans are the ones responsible for making sure that technology - including AI - is used for good instead of evil, he said.

“People of faith are going to have to make it happen by steering this technology towards its better uses and away from its worse ones.”

Vukov argued that the present moment actually poses an opportunity for people of faith.

He told The Pillar that the AI moment “should be thought of as an evangelical moment for Catholics. One in which we can show that our understanding of the human person is much more satisfying than modern, secular rivals.” 


Encounter and trust

Questions of ethics surrounding AI have also drawn the attention of the Vatican. In 2019, the Center for Digital Culture at the Pontifical Council for Culture sponsored discussion groups on AI. The discussions culminated in a book-length study titled Encountering AI: Ethical and Anthropological Investigations.

AI-generated. Credit: Pillar Media.

Green, who was a partner in the working groups, said the goal was “to get ahead of some of the major disruptions that AI is likely to cause, not only to education, but also with our human self-conception.” 

Jordan Wales, associate professor and Kuczmarski chair of theology at Hillsdale College, joined a number of others (including Green, Vera, Courtois, and Scherz) in the working group, which he said is aimed at “assisting the Church in engaging members of industry, government, and other areas of society with an informed Catholic perspective.”

Many of the working group participants are trained in disciplines beyond theology and philosophy - for example, engineering, psychology, cognitive science, and genetics - bringing a robust range of experience to the discussions.

Vera noted that while discussion and publication of the book are being sponsored by the Vatican, the group is “not officially representing the Vatican.” 

This allowed the working group, which welcomed some members who are not Catholic, to “have enough freedom to explore questions where there is more controversy or less consensus,” he said, adding that the conversations were “sometimes intense, but also consistently fruitful.”

The working group sought to establish the theological and philosophical foundations from which to examine questions surrounding AI. Among the topics they considered was a theme often raised by Pope Francis - the notion of encounter.

“Relationships are one of the most core parts of human identity,” said Green. 

“When storytellers imagine AI they often imagine a relationship with AI, whether professional, friendship, or even romantic.”

As a result, the group placed a particular focus on relationships, both human and machine.

The publication, he said, “comes to a definite conclusion: AI should not be allowed to replace humans in our relationships.” 

Wales told The Pillar that a theme of encounter “gets down to the root of our own personhood, whether it be our encounter with other human beings in a world shaped by AI-driven automation or whether it be our encounter with apparently personal AI devices that…will subtly reshape the ‘personal’ environment in which we understand ourselves and live out our personal lives.” 

Trained in historical theology, Wales pointed to the transformation in the understanding of the human person that was brought about by Christianity.

In the ancient Roman view, said Wales, the human being's value was determined by his or her fulfillment of service to the city. 

But Christian faith in the Trinity re-visioned the human person as a relational individual who flourishes most fully in compassionate relationships and authentic encounters. 

“All that we do, all with which we interact,” he said, “shapes the actualization of our own self-gift.” 

There are also questions of trust surrounding human experience with AI, and these are questions that Vukov believes will only grow more acute as AI becomes more pervasive in society. 

“How can we trust the news media, when articles and images can be generated by AI? How can we trust the words of a politician, when those words could be an AI deep fake? How can we trust that the voice on the phone is human- and not AI-generated?” he asked.

Once these seeds of distrust are sown, he noted, they are nearly impossible to uproot.

Still, AI is not only a hindrance to authentic human encounter, Vukov said. 

He highlighted a video in which an AI-enabled device allowed a woman with paralysis to communicate better than she had been able to communicate in years. 

“She talks with her husband about the Blue Jays,” said Vukov. 

“Their conversations are genuinely moving. It’s also clearly an example of AI being used to enable genuine encounter between people rather than undermine it.”


While AI’s potential impact on modern society is vast, there are certain fields that are already experiencing profound shifts as a result of AI. 

The Pillar asked the experts in theology and philosophy to comment on a few of them:

Artistic creation

AI-created images, videos, and texts are now increasingly found in artistic spaces. This phenomenon has been met with mixed responses - some people are alarmed by the way that generative AI systems can produce such creations, while others see such creations as an acceptable and even positive development.

AI-generated. Credit: Pillar Media.

Vera warned against a certain philosophy which holds that it is desirable to have AI take over the types of “more basic/boring/tedious lower-level habits and capacities [that] can be outsourced to a machine without cost.” 

This view, Vera said, presupposes that creativity happens only at the “top” of our intellectual activity, and that handing more basic activities to AI will free up humans for creative endeavors. 

However, he said, that is not how creativity actually works, if people have the correct understanding of the embodied human person in an encounter with the real world of God’s creation. 

The “basic boring tasks” are “usually crucial for acquiring the habits we need for profound and fruitful creativity, which necessarily involves a deep confrontation with reality,” he said. 

But given the way generative AI is being used now, Vera believes it will be difficult to avoid the sense that true artistic creativity will come from even fewer people than it does now. He pointed to the way that the phonograph and other recorded music changed “normal people's” dispositions for playing their own musical instruments.

High art may preserve a sense of what real creativity requires, but for a lot of people, the ability to create or even to recognize attention-deserving art will thin out, he predicted.

Education

For teachers, AI may seem like a tool for plagiarism. More subtly, AI also carries the threat of molding young people, as Vera put it, into obedient consumers and producers within a dehumanizing economy.

AI-generated. Credit: Pillar Media.

These trends are obviously bad and should be resisted, Vukov said. But the rise of AI also offers a chance to reflect on what education is for in the first place.

Rather than simply lamenting how students can now produce boilerplate essays at the click of a button, Vukov suggested that society could instead “take it as a moment to reflect: should we be training students to produce boilerplate essays in the first place? The answer should be obvious. No, we shouldn’t.”

Education, he said, “should be encouraging our students to become critical thinkers, thoughtful citizens, prayerful adults, and conscientious neighbors.”

“We want our students to fall in love with learning, to gain an appreciation of art and music and prose and technology. We want them to identify connections between the disciplines, to use their knowledge as a positive force in the world, to explore the ways in which their learning can affect their own lives.” 

AI cannot accomplish any of these things, he said, so it cannot undermine education, properly understood. Rather, the challenges it offers can push for a rediscovery of the kinds of education that should have been delivered all along.


Sex

Another major question raised by AI comes in the realm of sex. The creation of AI-generated pornography has become a controversial subject, but one that could revolutionize an already tremendously lucrative industry.

AI-generated. Credit: Pillar Media.

Wales told The Pillar that “the experience of sexual intimacy is so intoxicating to humans not only because it feels great but because the feeling is experienced as an affirmation of one's very being.”

As Christians understand, he said, “we exist as persons by having our being from another (God) and our lives in relationships that depend on one another (neighbor).” 

Because sex, properly understood, is total self-gift, all forms of pornography are distortions of the fundamental nature of sex, Wales said. And this distortion could reach new levels if the experience of sex can be had with a consumer robot lacking interior experience of its own. 

If this happens, he said, “we will have accomplished the experience of self-making, the sort of thing that many of the medievals claimed the devil desired: to be self-constituted. The robot will give us a mirror of what we want or think we want; as an extension of our wills it will give us the experience of affirmation from another—but without the risk of belonging to another.”

In short, said Wales, this could become the ultimate personal idol - and could transform society’s understanding of sex and its purpose.


Medicine

OpenAI’s ChatGPT has passed medical licensing exams. But according to Lloyd Minor, dean of Stanford’s School of Medicine, even that dramatic development will pale in comparison to what AI will bring next in medicine.

AI-generated. Credit: Pillar Media.

“What we did in the last 100 years, we’ll achieve in the next 10 years, or even five years,” he recently predicted. Given that the last century produced everything from dialysis to cardiac defibrillation to the fetal ultrasound and CT scanner, if Minor is even close to correct, the next decade will be transformative for medicine on a fundamental level.

Some of this transformation may be welcome - such as the ability to lighten the workload of overworked physicians and nurses, who could have documentation, summaries, reports, orders and much else done far more efficiently, as well as greater access to health care through apps and telehealth.

The ability to more efficiently monitor disease, and to sort through research and create new research models in a rapid manner could also lead to significant strides in the field of medicine.

But alongside potential advances, Scherz cautioned about unintended consequences, particularly with regard to the danger of “undermin[ing] the relationship between the patient and medical practitioner that is at the heart of medicine.” 

For instance, under the banner of efficiency, Scherz suggested that AI scheduling could reduce the time physicians have with patients, a focus on screens could turn the practitioner’s primary gaze away from patients, telehealth could undermine physical contact, and chatbots could remove the person of the practitioner altogether. 

Collier, author of an article in BMJ Leader titled ‘What is Medicine For?’, also pointed to ongoing debates in Western health care about the underlying purpose of medicine. Without resolving these disagreements, she said, the limits of AI are not clear.

While medicine is sometimes viewed in the secular world as a merely technical enterprise, she said, at its core medicine is really about “caring for human beings facing sickness and death.”

“Medicine does not ‘build’ or ‘fashion’ or really ‘fix’ any thing. It is not a production that deals with things. Medicine cares for human beings.”

For Collier, getting the underlying anthropology right is critical in fostering a proper understanding of medicine.

Part of the push to embrace AI in medicine “is driven by scientific-economic forces that believe that mankind’s anthropology is machine-legible; that man and machine are closely analogous,” she said. “This is where we begin to cross pollinate human attributes with machine attributes, and vice versa.” 

This, she said, is “dangerous ground” because it “transects our humanity; it tears our humanity apart.” 

While Collier is grateful for advances in medical technology, she describes herself as “skeptical” about what will become of medicine as AI advances. She fears the risk of putting human beings and the practice of medicine at the service of AI-driven technologies, instead of the other way around.

Collier told The Pillar that religious institutions of health care must especially hold fast to their mission and identity in light of the coming AI onslaught. 

In his work, “Staying Human in an Era of Artificial Intelligence,” Vukov imagines a way of holding the line in health care.

Speaking to The Pillar, he pointed to health care for seniors as an example of a field in which AI presents both risks and opportunities.

While it is important to avoid a cultural slouch toward our loved ones being ‘cared for’ by AI-powered robots, Vukov said, people should also recognize the possibility of designing “human-centered” AI interventions that could better respect the image of God in the elderly.

Right now, he said, the human-driven system of ‘care’ for this population—what in many cases is little more than a large and lonely warehouse where people wait to die—falls woefully short of respecting their humanity. 

Vukov suggests that people can discipline AI to serve the goals of authentic health care in this context by freeing health care providers from menial and even backbreaking tasks, thus giving them more opportunities for genuine encounters with their patients and residents.


What else?

Fears about the advent of AI often take the form of doomsday scenarios - dramatic and catastrophic events drawn from the plots of popular movies and books in which robots go rogue and threaten the very survival of humanity.

AI-generated. Credit: Pillar Media.

But there are much more subtle threats posed by AI as well, Collier suggested. For example, if people begin to believe that humans and AI are alike as “conscious entities,” they might be tempted to see themselves as objects for the purpose of productivity - things to be used and then discarded.

These subtle shifts are also concerning to Vukov, who worries about the quiet ways AI could harm the world by “robbing it of its flavor” — by “muting creativity” in “a world painted in 50 shades of beige.” 

He argues that our primary worry should likely be the slow burn and cumulative effect this could have over time as life is flattened into the lowest common denominator in ways people will find it difficult to recognize. 

Vukov said he suspects AI will “seep into places we never expected, much as the internet has crept into our watches, audio systems, cars, and refrigerators.” 

Green hopes that society’s reaction to the current AI moment is that the prospect of such inhuman blandness, such soul destroying and encounter-less existence, will make it “obvious to us that what makes us human is our capacity to love and care for each other.” 

He and others with theological commitments in these areas are devoted to “making sure that AI is used to help people, care for human life, and show compassion to others” - by ensuring that the humans running the AI systems prioritize these values. 

“AI is a judgment upon us human beings who are creating and using it,” said Green. “If we use it wrongly, we will reap our own punishment. But if we use it wisely, we might come to live in a better world.”

