A university professor is warning his administration, and universities across the country, that AI chatbots are the future of plagiarism.
In a Facebook post earlier this month, Furman University philosophy professor Darren Hick recounted his first experience with a student who attempted to use artificial intelligence chat software to cheat on an assignment. Hick caught the student because the AI’s work was shoddy and showed no substantive grasp of the course material. But he warned that this will not always be the case, and said universities need to develop ways to stop students from using such software to complete assignments.
“Today, I turned in the first plagiarist I’ve caught using A.I. software to write her work, and I thought some people might be curious about the details,” Hick wrote on Facebook. The student used an AI chat website called ChatGPT, where users type prompts and the software generates human-like responses. The prompt the student was given, and that she in turn gave to the software, was to write 500 words about the Scottish philosopher David Hume’s paradox of horror.
“ChatGPT responds in seconds with a response that looks like it was written by a human—moreover, a human with a good sense of grammar and an understanding of how essays should be structured,” Hick wrote. “In my case, the first indicator that I was dealing with A.I. is that, despite the syntactic coherence of the essay, it made no sense. The essay confidently and thoroughly described Hume’s views on the paradox of horror in a way that was thoroughly wrong. It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bulls***ing after that. To someone who didn’t know what Hume would say about the paradox, it was perfectly readable — even compelling. To someone familiar with the material, it raised any number of flags.”
The professor concluded that such plagiarism would be easy to spot in high-level philosophy courses where the material is obscure. But for lower-level classes or other disciplines like literature or political science, the software is a “game-changer.”
Fortunately, Hick knew of software that could detect chatbot-generated text, and he ran the student’s essay through it; the check found a near-certainty that the essay was AI-generated. In experimenting with ChatGPT himself, Hick noticed some tendencies in the AI’s composition that could help identify future chatbot-produced essays, but he noted that because the software is based on a neural network that is continually learning, it will keep improving and become harder to detect.
“Administrations are going to have to develop standards for dealing with these kinds of cases, and they’re going to have to do it FAST,” Hick concluded. “In my case, the student admitted to using ChatGPT, but if she hadn’t, I can’t say whether all of this would have been enough evidence. This is too new. But it’s going to catch on. It would have taken my student about 5 minutes to write this essay using ChatGPT. Expect a flood, people, not a trickle.”
Hick added that until his university develops a protocol to stop AI plagiarism, he will counter it in his own way: giving an impromptu oral exam to any student he suspects of using such software.