What the humanities can offer in this technological moment
The growth of artificial intelligence is opening a conversation for
which ethicists and theologians are uniquely qualified.

(Century illustration | Source image by David Griffiths on Unsplash)
I spend my days thinking and writing and teaching about purpose. I’m a theologian, so officially I think and write and teach about God and God-related things. But I understand my vocation as having made the choice to live inside the question, So what? The world is here, and so am I, and so are you. So what? This is the realm of theology.
It is also the realm of philosophy, the discipline to which theology is both indebted and somewhat inextricably bound. Personal dynamics aside, theologians and philosophers do much of the same work on campuses, especially with undergraduate students, by asking questions that get to the heart of the human experience. And now we can broaden this project to other disciplines that see themselves this way: history, visual and performing arts, and literature—the cluster of human experience–centered disciplines we call the humanities.
I’ve recently been having a lot of conversations about artificial intelligence. Inevitably, these conversations begin with a mix of conspiracy theories, frustration, and fear: Who is watching us and listening to us? Who is really in control here? What is happening with all that data? And to sum up a lot of it: Cui bono? (Who is benefiting?) Some people I talk to feel secure that AI could not replace them: No robot could ever do what I do. Others feel the threat acutely: I’ve already seen AI starting to replace many of my daily tasks at work.
It’s right around this point in the conversation that I feel a surge of pedagogical energy. Other teachers will know this moment well: a student says something in class that opens the perfect door and you rush through it excitedly, making just the right connection or raising the next well-timed question that moves the class forward. I live for these moments. To feel a conversation move like that, to gesture to the open door where students had previously thought there was a brick wall—it inspires my faith in the power of education. It is in these moments that the conversation turns to the biggest questions of the human experience, to the “so what?” questions. They help us see the unity of truth across disciplines, exploring how various approaches and bodies of knowledge can be brought to bear on this question of purpose.
Nevertheless, those of us in the humanities feel an urgency to defend our existence. As soon as I set that urgency aside and return my focus to my teaching or writing, the phrase “the challenges of higher education” pops up in a meeting, usually on the lips of an administrator who wants me to think about how to recruit and retain students. My faculty colleagues and I nod in dutiful agreement, but I have to admit that sometimes I spend the rest of the meeting thinking about what exactly those challenges are. For my administration (and many others, I’m sure), the challenges are primarily financial: fewer students of traditional college age populate our recruitment area, and those who do are less and less inclined to go to college.
I teach at a relatively small Catholic university in the Northeast. Our regional reputation is dominated by the strength of our nursing program, and for good reason. But the university has maintained a commitment to the liberal arts despite the scheduling and financial challenges. All around us, departments, schools, and entire colleges are closing, citing a lack of interest from students and a lack of relevance to the job market. They don’t usually say the second part as bluntly, but they sure do mean it. In one recent report, Vermont State University evaluated its programs, justifying their consolidation or closure: “Fine Arts is not identified as a critical occupation in Vermont.” I’m picking on Vermont, but it’s just one example of an ongoing evaluation of the humanities according to relevance, which is of course defined almost entirely in terms of the market.
Cue a deluge of articles about the great financial prospects of humanities majors. These articles excite and energize humanities faculty across the country; one can almost see them waving these articles (yes, they printed them out) in faculty meetings, shouting, “See? We matter! Our graduates make money, too!”
This line of defense may in fact save our disciplines from the chopping block of US higher education. Money is important. We have to eat, and so do our graduates. But the “relevance” of humanities education lies not in our graduates’ salaries so much as in the fact that the disciplines and industries where our graduates probably won’t end up still need us, whether they know it or like it or not. To the extent that these disciplines and industries make use of humanities expertise, they might actually be able to live up to the ideals they so often put forward in mission statements. Indeed, mission statements themselves are a product of humanities-style thinking about culture and purpose.
To be clear, when I say they need us, I don’t mean, “It would be really great for them to have some humanities graduates on staff to edit the writing or whatever.” I mean they need us in an existential way. At the risk of sounding dramatic, the extent to which they rely on our expertise might just alter the direction of the planet and our species. In fact, let’s dispense with the us/them dichotomy altogether. If what I’m saying is true, we all need the humanities—theology and philosophy in particular—and for reasons much more important than their relevance to the market.
In my courses, it takes careful work to cultivate a critical lens for technological culture. With many casual observers of technology, including my students, the first step is to acknowledge that technology is neither value-free nor somehow neutral. Most people engage with various technological tools under the assumption that their usefulness somehow implies neutrality. One often hears, for example, that it’s not the technology itself that matters but how you use it.
This is untrue at best and dangerous at worst. Our technologies not only shape our experiences, they also reflect what their developers consider important or unimportant. Some ideas and values are highlighted, and others are silenced. Students often enter my courses focused primarily on the effects of technology, with little skill in analyzing the technology itself—its history, development, and ideological valence.
See, for example, the website of OpenAI, the company that developed ChatGPT and DALL-E—two generative artificial intelligence tools, one textual and the other visual. Generative AI differs from assistants like Alexa and Google Home, which process natural language mainly to retrieve information, in that it generates new responses or images instead of just fetching them. This, of course, has serious implications for any realm in which writing or creativity occurs, not the least of which is education.
OpenAI’s site is full of language that seems meant to head off anxiety around the technologies it is developing. The landing page even includes a “Safety & responsibility” heading in big type, with a link that leads to a page titled, “Developing safe and responsible AI.” (As I tell my students when we study the Torah, the rules wouldn’t be there unless someone was breaking them or at least thinking about breaking them.) The page includes a quote (again, in huge type) from OpenAI chief technology officer Mira Murati: “The key is to ensure that these machines are aligned with human intentions and values.” This, of course, points to questions about the concept of values alignment, an interesting blend of ethical theory and technological capability.
But now we’re in the realm of the humanities, of the allegedly irrelevant or noncritical disciplines. In the humanities, we know that values, safety, responsibility, and intention are hugely important philosophical and cultural concepts—and that we cannot assume we all mean the same thing when we deploy these terms.
At present, companies like OpenAI seem, in comments like this one, to at least acknowledge that their technologies are raising questions on the level of the human experience writ large. How they proceed in engaging these questions, however, remains to be seen.
The charter of OpenAI is a case study in ethical theory. Thinking beyond current work in generative artificial intelligence, OpenAI preemptively commits to “responsibility” regarding artificial general intelligence, which refers to AI that approximates human agency—currently still the stuff of science fiction. But generative AI such as ChatGPT is a huge step toward AGI technology, and OpenAI recognizes this: “We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.” It is obvious that ethical theory and ethics education have a natural place here, should these companies make space for them.
At the center of theological education lie questions of purpose, goodness, beauty, and truth. Under the layers of conspiracy theories, political ideology, and the swirling pot of identity politics that passes for civil discourse in our culture, we are once again coming face-to-face daily with such questions. Such conversations allow us to reengage the question of vocation: the idea that our individual lives may be part of the unfolding of the universe itself and that we have specific gifts for meaningful work lives.
The force of capitalism has encouraged people to construct their identities around their work and productivity. It is not yet clear to what degree AI will actually replace jobs, but people—including our students—see its potential, and it is throwing them into something of an existential crisis. Those of us who have devoted our lives to the deepest mysteries of the human experience find ourselves in a strange moment: the technocratic paradigm that so often threatens our disciplines has developed a technology that is opening a conversation for which we are uniquely qualified. What remains to be seen is how we will make use of this moment—and to what degree our disciplines can be brought to bear on our new technological reality.