Learning tool or BS machine? How AI is shaking up higher ed

caption: Katy Pearce, associate professor in the Department of Communication, is portrayed on Wednesday, Aug. 28, 2024, on the University of Washington campus in Seattle.

As students and their professors head back to college classrooms and lecture halls this fall, the elephant in the room is ChatGPT.

The large language model and others like it can correctly answer exam questions, write papers that would have taken hours to research, summarize complicated readings in convenient bullet points, and respond directly to professors’ feedback — all in a matter of seconds.

For students who can afford advanced versions of these homework-completing, test-taking, paper-writing AI machines, what would traditionally have been called “cheating” becomes virtually untraceable.

Meanwhile, universities are scrambling to adopt AI policies, warning students about the dangers of sharing personal information and about the inaccuracies and biases of AI-generated text, even as they encourage professors to incorporate AI into their teaching.

RELATED: ChatGPT infiltrates the arts world

University of Washington Associate Professor Katy Pearce has been walking the AI tightrope for the last two years. Pearce, a self-described “tech nerd,” teaches classes on technology and its impact on society, researches the use of technology in countries ruled by dictators, and edits the Journal of Computer-Mediated Communication.

When ChatGPT was released in November 2022, Pearce was on maternity leave.

“I was a little isolated from some of the initial freakouts that many of my colleagues were having,” Pearce said. “When I came back to teaching, I was floored that AI was on the scene, but I have to say it wasn’t that I didn’t know it was coming.”

Since then, Pearce has spent an inordinate amount of time incorporating AI into her classes. She has written an in-depth, 2,000-word AI statement that serves as a preamble to her course syllabus.

Yet despite her warnings that AI “is not a replacement for critical thinking and writing” and often gets things wrong, she said student use of AI on assignments, tests, and homework is widespread and virtually unchecked.

“This is one of the premier universities on the West Coast,” she said, “and it is rampant.”

AI as TA

While her initial reaction to AI was similar to that of many of her colleagues — “Oh, no, students are using this tool to cheat” — Pearce quickly recognized ways she could use AI to improve her teaching, make her exams harder to cheat on, and make her life easier.

RELATED: Microsoft joins plea for government regulation of AI tools like ChatGPT

“Even though I still have a lot of concerns about AI in the classroom and AI misconduct, I also feel like a lot of the tedious tasks in my personal life, in my work life, I just immediately offload them to AI,” Pearce said. “I have an AI window open all day long. It is like my personal assistant, teaching assistant, little buddy.”

Pearce has enough technical know-how to adapt AI so that it best serves her needs. Instead of defaulting to an AI universe that includes “all the internet,” she limits it to material related to her course — textbook chapters, her notes, her lectures, the assigned readings, and transcripts of videos viewed by her class.

With that knowledge base, her “little buddy” can conjure creative ideas for classroom activities, essay prompts, and quiz questions. She even created a bot that knows the material and can respond to students’ questions in real time, giving the same answers Pearce would.

“I just met with a student to talk about it,” Pearce said. “She was like, ‘This is amazing.’”
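Pearce doesn’t spell out the mechanics, but what she describes matches a common pattern called retrieval-augmented generation: index the course materials, retrieve the passages most relevant to a question, and have the model answer only from those passages. Here is a minimal sketch in Python, assuming the OpenAI API; the file names and model choices are illustrative stand-ins, not details from her setup.

```python
# A minimal retrieval-augmented course bot, sketched from Pearce's description.
# The file names and model choices are illustrative assumptions, not hers.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# 1. Load the course corpus: chapters, notes, readings, video transcripts.
paths = ["chapter1.txt", "lecture_notes.txt", "video_transcript.txt"]
docs = [open(p).read() for p in paths]

def embed(texts):
    """Turn text chunks into vectors so we can rank them by relevance."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question: str) -> str:
    # 2. Retrieve the course passage most similar to the student's question.
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = docs[int(sims.argmax())]
    # 3. Constrain the model to answer from that course material only.
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the course material below. "
                        "If the answer is not there, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What does this week's reading say about platform moderation?"))
```

The restriction lives in the system prompt and the limited corpus: a bot built this way gives answers grounded in the course rather than in “all the internet.”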

Power shift

Pearce warns her students that generative-AI platforms, at least the current versions, are producing what she views as C-level work.

caption: University of Washington students walk up a set of stairs toward the Suzzallo Library on the first day of school, Wednesday, September 29, 2021, on the University of Washington campus in Seattle.
KUOW Photo/Megan Farmer

Pearce has spent a lot of time reworking her teaching to incorporate AI, but she has spent as much time or more pursuing misconduct cases against students who turned in AI-generated work. In the past two years, she has documented roughly two dozen cases, each of which took two to four hours to submit.

During these investigations, Pearce has seen a change in the attitude of students who get caught turning in work generated by AI. In what she calls “the before times,” students would generally admit that they cheated.

“They would break and confess because they didn't really have any other options. Students now, they are probably aware that there's no real proof about what they did, so they will hold steady,” she said. “The power has shifted.”

Is AI equitable?

Pearce has encountered another issue: AI equity.

While many generative-AI platforms started off as free, they are now introducing paid tiers and limiting free users to a set number of prompts within a given time frame. Students who can afford to pay are able to use AI more easily on tests or assignments without detection.

caption: James Manyika speaks about Responsible AI at a Google I/O event in Mountain View, Calif., Tuesday, May 14, 2024.

For example, professors at schools that use Canvas as their web-based learning management tool can see when students leave their browser window for short periods of time during a test, often a sign that they are copying and pasting test questions into ChatGPT or another AI tool.

But students who can afford $24 per month for a tool such as StudyBuddy can take screenshots of questions and get answers without ever appearing to leave the Canvas browser. The StudyBuddy website promotes its product as providing “instant answers” that are “undetectable & plagiarism free.”

Pearce has seen students hit ChatGPT’s prompt limit and ask her for extensions so they can use the AI tool when their limit resets. GPT-4o limits free users to 10 messages every five hours; students who can pay $20 per month for ChatGPT Plus increase that limit to 40 messages every three hours.

Is AI use really "rampant"?

Because ChatGPT and other generative AI models are still relatively new, it is hard to say what percentage of students are using them.

A fall 2023 study by Tyton Partners, a consulting firm hired by the makers of the plagiarism-detection tool Turnitin, found that half of students use some form of AI. The study, which surveyed more than 1,000 faculty members and 1,600 postsecondary students, found that three out of four students who use AI to complete assignments said they would continue to do so even if their professors or institutions banned the technology.

A Stanford study done around the same time found that the percentage of high school students who admit to “cheating” with or without ChatGPT has remained relatively unchanged — about two-thirds of all students.

caption: Zoe Pomeroy, 18, of Madrona, sits at the edge of Red Square on the University of Washington Seattle campus.
Stephen Howie / KUOW

Zoe Pomeroy, an 18-year-old Seattle Prep graduate from Madrona, has so far mostly avoided using AI for assignments. This fall, Pomeroy will attend Trinity College in Dublin, Ireland, where she plans to study political and cultural interactions between the Middle East and Europe.

Pomeroy said she did use ChatGPT for an AP government class where the assignment was to use AI to come up with a speech from the perspective of a member of Congress.

“It was really interesting, because when everybody shared them, they all sounded really, really similar, and they all had similar lines that were almost identical,” she said. “I guess that showed that it isn't original work. It's just spewing out stuff that's already out there.”

BFF or just BS?

At the University of Washington, Pearce uses AI prompts to develop class activities, respond to students in creative ways, and even plan her vacations.

But some professors see the pervasiveness of AI as a dangerous trend that grants too much power to a tool that lacks the capacity to recognize meaning or truth.

Michael Townsen Hicks, James Humphries, and Joe Slater teach at the University of Glasgow and are co-authors of a paper published in Ethics and Information Technology in June 2024 titled, “ChatGPT is bullshit.”

Hicks and his co-authors worry that students who use AI as a research tool in their early years at college won’t build the reading and writing skills needed to address more challenging assignments as juniors and seniors.

RELATED: A Seattle English teacher on ChatGPT

As an 18-year-old incoming freshman, Pomeroy also worries that ChatGPT could become a crutch she relies on instead of putting in the effort to develop her academic skills.

“The whole thing with AI is it's just cycling information that's already out there, already on the internet, so we're not coming up with new ideas,” Pomeroy said. “I think it's like melting our brains if we're not critically thinking on our own, and we're just relying on a computer to make something that's good enough for us to submit to teachers.”

AI-generated reviews

AI-generated academic work might not be limited to what students are submitting to teachers.

Chahat Raj, who is pursuing a PhD in computer science at George Mason University, believes that bots generated some of the peer reviews she has received through online platforms, services that help researchers get their papers accepted by high-impact journals and prestigious conferences.

caption: Chahat Raj studies equity issues in generative artificial intelligence. She believes some of her papers submitted for online review were critiqued using AI.
Stephen Howie / KUOW

Raj, who spent the summer in Seattle as a visiting academic scholar at the University of Washington, researches the inherent bias in generative AI.

She started noticing tell-tale characteristics of AI in her peer reviews: bold text followed by bullet-pointed lists that were informative but not insightful; flawless paragraphs that lacked the occasional tangents typical of human writing; and strings of points without the critical thinking or analysis she would expect from academic experts.

She also noticed that her online reviewers often missed elements in the appendices of her papers. She knew that some AI tools impose a size limit when processing PDF submissions, which could leave appendices out of the review entirely.

“They have these repetitive points, like ‘no robustness in your experiments,’ ‘data is not good,’ ‘no good experiments,’ or ‘lack of human validation,’” Raj said, “which kept resurfacing across different submissions on different platforms.”

To test her theory, Raj uploaded a recent paper into ChatGPT and another AI platform, Claude. The models’ responses were nearly identical to the reviews she had received from her online peers.

“They just copied and pasted all the stuff there,” Raj said. “They didn’t even read it.”
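The article doesn’t say how Raj compared the texts, but a simple way to quantify that kind of overlap is to measure the similarity between a review she received and one generated by a model for the same paper. Here is a sketch in Python using scikit-learn; the file names are hypothetical.

```python
# A simple way to run a test like Raj's: compare the review a platform sent
# back with a review generated by a model for the same paper.
# File names are hypothetical; the article does not describe her exact method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

received_review = open("platform_review.txt").read()
model_review = open("chatgpt_review.txt").read()

# TF-IDF turns each review into a weighted word-frequency vector;
# a cosine similarity near 1.0 means the texts use nearly the same language.
vectors = TfidfVectorizer(stop_words="english").fit_transform(
    [received_review, model_review]
)
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"Similarity between received and model-generated review: {score:.2f}")
```

A high score alone wouldn’t prove a reviewer used AI, but the repeated, near-identical overlap Raj describes across platforms would be hard to explain otherwise.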

If Raj is right, not only is she getting potentially faulty feedback about her work, but she has also fed the very system she is trying to study. Her original research may have been added to the AI dataset and could be used as part of answers to future prompts.

AI guidelines

Pearce is on a UW committee that begins work this fall exploring more positive ways AI can be used in the classroom.

Meanwhile, the university has issued “interim guidelines” for AI and created an AI Task Force to come up with a more permanent policy.

The Glasgow professors remain unconvinced that AI is a magic tool capable of helping humans accomplish great things, like curing cancer or solving climate change.

“If you are totally new to any of this technology, you ask ChatGPT some questions or ask it to answer an essay question, I think it's quite easy to be amazed at how impressive it is,” Hicks said. “It seems like magic.”

But Hicks believes it is more likely, based on the way AI works, that models such as ChatGPT will foster confusion and deepen divisions rather than solve global problems.

“Adding a giant machine that doesn't really care about the truth, and then getting students to use it instead of doing research that involves double checking things, has the potential to further erode our connection to reality and truth,” he said.

RELATED: Scarlett Johansson says she is 'shocked, angered' over new ChatGPT voice

In hopes of convincing the public that AI is more than what Humphries calls “algorithms on steroids,” major companies like Google, Meta, and Microsoft are investing billions to create and distribute large language model programs and integrate them into every browser, program, and smartphone. According to the ad-measurement firm iSpot, tech companies have spent close to $200 million to promote AI in national TV commercials in 2024.

Even students who don’t search for answers on ChatGPT, Claude, or Gemini are finding that AI is coming to them in programs they already use. Microsoft is integrating Copilot into Office. Programs that used to help students with their writing, like Grammarly and QuillBot, now offer to rewrite entire paragraphs.

Google searches are now topped by a starred section called “AI Overview” that summarizes results in short paragraphs and bullet-pointed lists using generative AI.

Other AI tools help professors grade student work, and still others take notes for students by transcribing, organizing, and summarizing in-class or online lectures.

Time won back

Pearce believes student use of AI has become so commonplace that she doubts whether students even think of turning in computer-generated answers or essays as breaking the rules.

caption: University of Washington students walk through Red Square on the first day of school, on Wednesday, September 29, 2021, in Seattle.
KUOW Photo/Megan Farmer

“When the students are doing it, are they actively thinking like, this might be the time I get caught when they click ‘Submit’?” she asked. “Or has it become such a normalized part of the way they're working that they're not even thinking about it that way?”

When it comes to her own life, Pearce is determined to use the time AI saves her to be with her kids.

She shares her expertise on maintaining work/life balance with the help of AI in a series of webinars called, "AI for Academic Moms and Other Caregivers: Balancing Teaching, Research, and Life."

In the same way, she hopes that having AI do simple tasks can free up students to devote themselves to deeper discussions and more important questions.

“Hopefully that will open up opportunities for engagement in art or creative thinking or doing things for the community,” she said. “That time won back could really be a positive thing.”
