Catholic Colleges Wrestle With AI in the Classroom
A viral video clip from this summer highlighted what many educators and students already knew about the changing nature of college education.
In the clip, a graduating UCLA student appears on a Jumbotron and holds up a laptop with ChatGPT pulled up, clearly flaunting the online tool and "thanking" it for helping him on his final exams. The viral moment, though silly, sparked a fierce reaction online. For many people, the exuberant graduate, now headed for the real world, said the quiet part out loud: AI is sucking the oxygen out of college classrooms and taking developing brains with it.
Educators and students at universities, Catholic and secular alike, must now navigate a rapidly evolving landscape in which AI presents unprecedented opportunities for learning coupled with significant risks, particularly to academic integrity and the development of critical-thinking skills.
AI is no longer a new technology. Since OpenAI's ChatGPT debuted at the end of 2022, its use has become almost ubiquitous among college students. Recent surveys of students at both public and private institutions consistently find that more than 80% of students use AI tools regularly.
That doesn't mean all of those students are using these tools to cheat, of course, but some certainly are: Turnitin, the widely used plagiarism-detection service, reported that about 1 in 10 assignments run through its AI-detection tool in the past year were at least 20% AI-generated, and about 3% of submitted assignments were more than 80% AI-generated.
Building a Foundation
For those who have never or rarely used a "generative AI" tool like ChatGPT, its ability to produce large quantities of humanlike written content in seconds, with almost no effort on the user's part, is akin to magic.
With just a few keystrokes, a user can ask ChatGPT to "generate an essay on the War of 1812" and, for better or worse, it will do so, drawing on thousands of websites and using its large language model (LLM) to mimic humanlike writing, predicting word by word what should come next based on probability.
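For readers curious what that prediction step actually looks like, below is a minimal sketch, assuming the open-source Hugging Face transformers library and the small, publicly available GPT-2 model (an early forerunner of ChatGPT, used here only because ChatGPT's own model is not public). It displays the probabilities a language model assigns to candidate next words; generating an essay is just this step repeated, one word at a time.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library (pip install transformers torch) and the small
# public GPT-2 model. GPT-2 is a forerunner of ChatGPT, shown here only
# for illustration; ChatGPT's own model weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The War of 1812 was fought between the United States and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a raw score for every token in the vocabulary

# Convert the scores at the final position into probabilities and print
# the model's top guesses for the next word. Full text generation simply
# repeats this step, appending one predicted token at a time.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]).strip():>12}  p = {p.item():.3f}")
```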
There are serious drawbacks to doing this, of course. While AI can be a powerful research tool, its writing is often fluffy, poorly constructed and marked by distinctive quirks, and, depending on the topic, riddled with errors and fabricated "facts" known as hallucinations.
Educators at colleges all across the country have had to wrestle with their students' widespread use of these tools.
In the absence of a silver bullet for stopping AI use entirely, many professors have implored their students, with varying levels of success, to use AI only for ancillary, surface-level tasks such as generating practice tests to quiz themselves, exploring new ideas or soliciting feedback on their own writing; all are use cases where AI can be surprisingly helpful.
As educator Heather Shellabarger notes in her book AI and Theological Pedagogy, AI is actually quite good at summarizing dense texts, generating outlines, providing personalized feedback, creating flashcards, and even simulating conversations and new ideas. But overuse, Shellabarger continues, runs the risk of students developing an "epistemic laziness," where dependency on automation fosters "a weakening of the will and desire to truly know."
Several educators at Catholic institutions of higher learning highlighted an irony: Students who overuse AI, letting it do their schoolwork for them, are not developing the foundational skills necessary to use AI responsibly in the future.
Julian Velasco, a professor at Notre Dame Law School, stressed the importance of developing a "solid foundation" of understanding in the legal field before integrating AI.
Velasco said he has developed a personal policy of not allowing first-year law students, who are still learning to "think like lawyers," to use AI tools at all. Later in their education, Velasco allows his upperclassmen to use AI tools, but only after they have completed their assignments by hand.
In the field of law, AI can be extremely helpful, Velasco noted. Imagine, for example, having to read 100,000 pages of discovery for a case, a task AI could get through in minutes.
Using AI for research, brainstorming or polishing writing can be fine, he thinks, but "having it do everything for you is clearly not legitimate."
"I believe that [AI] is going to lead some people who know how to use it right to become more effective and more brilliant than ever before. But a large portion of the population is going to be seduced and become less effective than ever before, because, little by little, they'll rely on it too much," Velasco said.
Velasco said the varying approaches of different professors to AI mirror what is going on in the real world of law: Some firms are embracing AI, some are approaching it with caution, and others are banning it completely.
"The reason to 'do the hard thing' and not to use the tools is so that you can develop the foundation. Then you can make the right judgment as to how much is okay and how much is not okay," he said.
John Goyette, vice president and dean emeritus of Thomas Aquinas College (TAC), which has campuses in California and Massachusetts, penned an op-ed in The Wall Street Journal in May titled "How to Stop Students From Cheating With AI."
In his op-ed, which sparked a variety of reactions from educators around the country, Goyette touted TAC's educational model, a fixed liberal arts curriculum combined with discussion-based classroom instruction, which he said helps keep opportunities for AI cheating to a minimum. At colleges like TAC, where smartphones and laptops are banned from classrooms, cheating in general is far less likely, Goyette said.
Goyette told the Register the college also has clear policies against using AI for writing, considering it a form of plagiarism, with appropriately strict penalties.
Beyond the policies, Goyette said he tries to impress on his students that offloading their mental faculties to AI is akin to "hiring robots to work out at the gym for you."
"As the tool gets better, the temptation to use it is only going to grow," he said.
"If you want your students to really get an education where they're going to learn to think critically and develop the habits of mind that are necessary for human flourishing and for human freedom, it's imperative to rely on methods that are older, methods that depend more on oral communication and conversation, whether that be oral exams or more class discussion or having them write in blue books with a pen and paper," he said. "That's the only way that they're going to be able to develop habits of mind that are necessary to form the human person for happiness and freedom."
That's not to say that AI use is all negative, he added. It's true, for example, that AI can be very useful in various real-world fields if used correctly by people who have already developed the foundational skills. Goyette said he uses AI tools occasionally himself, and he thinks it's appropriate for professors to get to know the tools their students are likely using.
"I'm not saying there are no uses of AI, but for the most part, I think education without AI is going to be better at developing habits of mind: critical thinking, careful reading, the ability to develop an argument and unfold a position to other people, [collaboration] with their peers," Goyette said.
"Mostly that's going to be best without AI involved because the studies show that students at most colleges and universities are using AI all the time for nearly all of their assignments, to the point where they're offloading critical thinking," he said.
"I could imagine a very creative use of AI that wouldn't be opposed to developing critical thinking habits," he noted, "but I don't think that's how it's mostly being used on campuses today."
Michael Augros, a philosophy professor at TAC's Massachusetts campus, told the Register that in addition to discussion-based learning, he has students get up in front of the board and demonstrate concepts to the class. He also tries to make his essay prompts as specific as possible, often incorporating topics discussed in class. The more specific the prompts and expectations, the less successful AI tools will be at filling the bill.
Echoing Goyette, Augros said he thinks professors should make use of AI themselves, in part so they know how the tools work and in part because familiarity with the tools will make it easier to spot AI-generated content.
Above all, though, Augros said he seeks to impress on his students that they're "ripping themselves off" if they have AI do their homework for them, and that doing so turns AI into a tool of enslavement, not freedom.
"Why would you want to farm out your own mental exercise?" Augros asks his students.
Campus Culture Is the Key
While even the most airtight rules cannot completely eliminate cheating, AI-facilitated or not, students at Catholic universities told the Register that the culture at those schools helps foster an environment where most students are genuinely interested in learning and recognize that AI-generated papers are not the way to achieve it.
Benedictine College in Atchison, Kansas, for example, doesn't have an institution-wide policy on AI use. For now, it is left to each professor's discretion how AI can or cannot be used in class, Steve Johnson, Benedictine's communications director, told the Register.
Many professors at Benedictine enforce a strict no-AI policy, however, especially in philosophy and theology courses. And like TAC, Benedictine places broader limits on classroom technology, such as smartphones.
Gabe Maday, a Benedictine senior philosophy and theology major from Michigan who is discerning a vocation with the Dominicans, told the Register that he sees AI as a double-edged sword.
A 20-page paper that might take a human a month takes ChatGPT 10 minutes, but the output is often poorly crafted, Maday said. And at the end of the day, it's just not worth it to compromise one's integrity for the sake of a grade, especially at a small Catholic college like Benedictine, where "professors know your name" and would almost certainly be personally disappointed in students who cheated, he said.
Plus, "you haven't learned anything if you get it to do your work for you," Maday added. "I've worked hard to get where I am, and putting my name on something a machine wrote would be terrible."
Maday said that while AI might speed up research, relying on it as a writing crutch cuts students off from the formative experience of grappling with difficult texts and ideas. As a budding scholar, Maday said he appreciates the process of building understanding and adding to his chosen field by reading deeply about a subject and writing about it.
"As a search engine, it's a great tool, an extension of Google, basically," Maday said. "But what I struggle with is when it starts to take over ideas."
"God created us with the capacity to have ideas and add to the world," he continued. "[When you let AI do the work for you,] that's shut off for the sake of convenience."
John Paul Doyle, a senior accounting major from Colorado, noted that AI is not verboten campus-wide at Benedictine. In fact, in several of his accounting classes, professors have actually encouraged the use of AI, with the major caveat that students must double-check all of its outputs. Still, Doyle said he is personally wary of the tech and uses it only occasionally.
"For some majors, AI is a tool. For others, like philosophy … AI cannot philosophize. It can't actually write those papers," he said. "In business, it can summarize. In chemistry, it can be helpful. Policies should probably differ depending on the department."
"The business world is very in tune with it. We're testing the tools that are available," Doyle said. "It's foolish to teach in a way that doesn't let you use tools within the industry. But you have to fact-check the AI. On tax law, it's only accurate about 50% of the time," he continued.
Maday said he believes the debate over AI is opening up "long overdue" questions in education about the purpose of school and the extent to which universities exist to form the whole person and not just to print degrees.
"At the end of the day, where do my priorities lie?" Maday reflected. "If it's just for the degree, then using ChatGPT makes sense. But if it's about forming the whole person, then we shouldn't be relying on it."
Mariele Courtois, a Benedictine theology professor and a member of the college's AI Taskforce, told the Register that she thinks institutions can offer clear policies against academic dishonesty and a helpful framework for addressing AI.
This involves, she said, faculty within a discipline deliberating together about the goals and contributions of their field, and about how to head off AI use that would undermine students' ability to understand the content of a class or build the skill set the course promotes.
"The Catholic intellectual tradition provides resources for contemplating the value of work, the benefits of cultivating virtue and forming a strong conscience, and the truth of the human person. These are all relevant to engaging questions about AI, which can threaten to distance us from our relationship to work, remove opportunities of building important habits, and tempt users to turn to a chatbot for moral counsel over turning to prayer and relationships," Courtois said.
"Ultimately, a Catholic educational community needs to discern ethical choices about AI in light of seeking relationship with God. We should motivate students to consider ways to positively apply AI to address needs and solve obstacles to human flourishing in the fullest sense."