A viral video clip from this summer highlighted what many educators and students already knew about the changing nature of college education.
In the clip, a graduating UCLA student appears on a Jumbotron and holds up a laptop with ChatGPT pulled up, clearly flaunting the online tool and “thanking” it for helping him on his final exams. The viral moment, though silly, sparked a fierce reaction online. For many people, the exuberant graduate, now headed for the real world, said the quiet part out loud — that AI is sucking the oxygen out of college classrooms and taking developing brains with it.
Educators and students at universities, Catholic and not, now must navigate a rapidly evolving landscape where AI presents unprecedented opportunities for learning coupled with significant risks, particularly concerning academic integrity and the development of critical thinking skills.
AI is no longer a new technology. Since OpenAI’s ChatGPT debuted at the end of 2022, its use has become almost ubiquitous among college students. Recent surveys of students at both public and private institutions consistently show that more than 80% use AI tools regularly.
That doesn’t mean all of those students are using these tools to cheat, of course, but some certainly are: Turnitin, the widely used plagiarism-detection service, reported that about 1 in 10 assignments run through its AI detection tool in the past year were at least 20% AI-generated, and about 3% of submitted assignments were more than 80% AI-generated.
Building a Foundation
For those who have never or rarely used a “generative AI” tool like ChatGPT, its ability to produce large quantities of humanlike written content in seconds, with almost no effort on the user’s part, is akin to magic.
With just a few keystrokes, a user can ask ChatGPT to “generate an essay on the War of 1812” and, for better or worse, it will do so — drawing on thousands of websites and using its large language model (LLM) to mimic humanlike writing by predicting, one probable word at a time, what should come next.
There are some serious drawbacks to doing this, of course — while AI can be a powerful research tool, its writing is often fluffy, poorly constructed and marked by distinctive quirks, and, depending on the topic, riddled with errors and fabricated “facts” known as hallucinations.
Educators at colleges all across the country have had to wrestle with their students’ widespread use of these tools.
In the absence of a silver bullet for stopping AI use entirely, many professors have implored their students, with varying levels of success, to use AI only for ancillary, surface-level tasks such as generating practice tests to quiz themselves, exploring new ideas or soliciting feedback on their own writing — all use cases where AI can be surprisingly helpful.
As educator Heather Shellabarger notes in her book AI and Theological Pedagogy, AI is actually quite good at summarizing dense texts, generating outlines, providing personalized feedback, creating flashcards, and even simulating conversations and new ideas. But overuse, Shellabarger warns, risks students developing an “epistemic laziness,” in which dependency on automation fosters “a weakening of the will and desire to truly know.”
Several educators at Catholic institutions of higher learning highlighted the ironic fact that students who overuse AI — i.e., those who let AI do their schoolwork for them — are not developing the essential foundational skills necessary to use AI responsibly in the future.
Julian Velasco, a professor at Notre Dame Law School, stressed the importance of developing a “solid foundation” of understanding of the legal field before integrating AI.
Velasco said he has developed a personal policy of not allowing first-year law students — who are still learning to “think like lawyers” — to use AI tools at all. Later in their education, Velasco allows his upperclassmen to use AI tools, but on the condition that they use them only after completing their assignments by hand.
In the field of law, AI can be extremely helpful, Velasco noted. Imagine having to review 100,000 pages of discovery documents for a case, for example, a task AI could complete in minutes.
Using AI for research, brainstorming or polishing writing can be fine, he thinks, but “having it do everything for you is clearly not legitimate.”
“I believe that [AI] is going to lead some people who know how to use it right to become more effective and more brilliant than ever before. But a large portion of the population is going to be seduced and become less effective than ever before, because, little by little, they’ll rely on it too much,” Velasco said.
Velasco said the varying approaches of different professors to the use of AI mirror what is going on in the real world of law — some firms are embracing the use of AI, some are approaching it with caution, and others are banning it completely.
“The reason to ‘do the hard thing’ and not to use the tools is so that you can develop the foundation. Then you can make the right judgment as to how much is okay and how much is not okay,” he said.
John Goyette, vice president and dean emeritus of Thomas Aquinas College (TAC), which has campuses in California and Massachusetts, penned an op-ed in The Wall Street Journal in May titled “How to Stop Students From Cheating With AI.”
In his op-ed, which sparked a variety of reactions from educators around the country, Goyette touted TAC’s educational model — a fixed liberal arts curriculum, combined with discussion-based classroom instruction — which he said helps to keep opportunities for AI cheating to a minimum. At colleges like TAC, where smartphones and laptops are banned from classrooms, cheating in general is far less likely, Goyette said.
Goyette told the Register the college also has clear policies against using AI for writing, considering it a form of plagiarism, with appropriately strict penalties.
Beyond the policies, Goyette said he tries to impress on his students that offloading their mental faculties to AI is akin to “hiring robots to work out at the gym for you.”
“As the tool gets better, the temptation to use it is only going to grow,” he said.
“If you want your students to really get an education where they’re going to learn to think critically and develop the habits of mind that are necessary for human flourishing and for human freedom, it’s imperative to rely on methods that are older — that depend more on oral communication and conversation, whether that be oral exams or more class discussion or having them write in blue books with a pen and paper,” he said. “That’s the only way that they’re going to be able to develop habits of mind that are necessary to form the human person for happiness and freedom.”
That’s not to say that AI use is all negative, he added. It’s true, for example, that AI can be very useful in various real-world fields if used correctly by people who already have developed the foundational skills. Goyette said he uses AI tools occasionally himself, and he thinks it’s appropriate for professors to get to know the tools their students are likely using.
“I’m not saying there are no uses of AI, but for the most part, I think education without AI is going to be better at developing habits of mind — critical thinking, careful reading, the ability to develop an argument and unfold a position to other people, [collaboration] with their peers,” Goyette said.
“Mostly that’s going to be best without AI involved because the studies show that students at most colleges and universities are using AI all the time for nearly all of their assignments, to the point where they’re offloading critical thinking,” he said.
“I could imagine a very creative use of AI that wouldn’t be opposed to developing critical thinking habits,” he noted, “but I don’t think that’s how it’s mostly being used on campuses today.”
Michael Augros, a philosophy professor at TAC’s Massachusetts campus, told the Register that in addition to discussion-based learning, he also has students get up in front of the board and demonstrate concepts to the class. He also said he tries to make his essay prompts as specific as possible, often incorporating topics discussed in classroom settings. The more specific the prompts and expectations are, the less successful AI tools will be at filling the bill.
Echoing Goyette, Augros also said he thinks professors should make use of AI themselves — in part so they know how the tools work and also because familiarity with the tools will make it easier for professors to spot AI-generated content.
Above all, though, Augros said he seeks to impress on his students that they’re “ripping themselves off” if they have AI do their homework for them — that doing so turns AI into a tool of enslavement, not freedom.
“Why would you want to farm out your own mental exercise?” Augros asks his students.
Campus Culture Is the Key
While even the most airtight rules cannot completely eliminate cheating — AI-facilitated or not — students at Catholic universities told the Register that the culture at those universities helps to foster an environment where most students are genuinely interested in learning and recognize that AI-generated papers are not the way to achieve that.
Benedictine College in Atchison, Kansas, for example, doesn’t have an institution-wide policy on AI use. Right now, how AI can or cannot be used in a class is left to each professor’s discretion, Steve Johnson, Benedictine’s communications director, told the Register.
Many professors at Benedictine enforce a strict no-AI policy, however, especially in philosophy and theology courses. Like TAC, Benedictine also limits classroom technology, such as smartphones, more broadly.
Gabe Maday, a Benedictine senior philosophy and theology major from Michigan who is discerning a vocation with the Dominicans, told the Register that he sees AI as a double-edged sword.
A 20-page paper that might take a human a month takes ChatGPT 10 minutes, but the output is often poorly crafted, Maday said. And at the end of the day, it’s just not worth compromising one’s integrity for the sake of a grade — especially at a small Catholic college like Benedictine, where “professors know your name” and would almost certainly be personally disappointed in students who cheated, he said.
Plus, “you haven’t learned anything if you get it to do your work for you,” Maday added. “I’ve worked hard to get where I am, and putting my name on something a machine wrote would be terrible.”
Maday said that while AI might speed up research, relying on it as a writing crutch cuts students off from the formative experience of struggling and grappling with difficult texts and ideas. As a budding scholar, Maday said he appreciates the process of building understanding and adding to his chosen field by reading about a subject deeply and writing about it.
“As a search engine, it’s a great tool — an extension of Google, basically,” Maday said. “But what I struggle with is when it starts to take over ideas.”
“God created us with the capacity to have ideas and add to the world,” he continued. “[When you let AI do the work for you,] that’s shut off for the sake of convenience.”
John Paul Doyle, a senior accounting major from Colorado, noted that AI is not verboten campus-wide at Benedictine — in fact, in several of his accounting classes, his professors have actually encouraged the use of AI, with the major caveat that students must double-check all of its outputs. Still, Doyle said he is personally wary of the tech and uses it only occasionally.
“For some majors, AI is a tool. For others, like philosophy … AI cannot philosophize. It can’t actually write those papers,” he said. “In business, it can summarize. In chemistry, it can be helpful. Policies should probably differ depending on the department.”
“The business world is very in tune with it. We’re testing the tools that are available,” Doyle said. “It’s foolish to teach in a way that doesn’t let you use tools within the industry. But you have to fact-check the AI. On tax law, it’s only accurate about 50% of the time,” he continued.
Maday said he believes the debate over AI is opening up “long overdue” questions in education about the purpose of school and the extent to which universities exist to form the whole person and not just to print degrees.
“At the end of the day, where do my priorities lie?” Maday reflected. “If it’s just for the degree, then using ChatGPT makes sense. But if it’s about forming the whole person, then we shouldn’t be relying on it.”
Mariele Courtois, a Benedictine theology professor and a member of the college’s AI Taskforce, told the Register that she thinks institutions can offer clear policies against academic dishonesty and a helpful framework for addressing AI.
This involves, she said, faculty within a discipline deliberating together about the goals and contributions of their field and about how to discourage AI use that would undermine students’ ability to understand the content of the class or build the skill set the course is meant to develop.
“The Catholic intellectual tradition provides resources for contemplating the value of work, the benefits of cultivating virtue and forming a strong conscience, and the truth of the human person. These are all relevant to engaging questions about AI which can threaten to distance us from our relationship to work, remove opportunities of building important habits, and tempt users to turn to a chatbot for moral counsel over turning to prayer and relationships,” Courtois said.
“Ultimately, a Catholic educational community needs to discern ethical choices about AI in light of seeking relationship with God. We should motivate students to consider ways to positively apply AI to address needs and solve obstacles to human flourishing in the fullest sense.”