Is this “the death of academia”?

The release of OpenAI’s ChatGPT leaves many wondering what the future of education looks like in an increasingly digital world


Samir Shaik

In an article for The Atlantic on Dec. 6, 2022, just one week after ChatGPT’s release date, Canadian author Stephen Marche proclaimed that “the college essay is dead” and that “no one is prepared for how AI will transform academia.” Since then, the news has seen an enormous influx of different people—everyone from professors and computer scientists to average citizens—giving their input on what artificial intelligence means for academia and for the world.

As the keyboard clicks and the screen flickers to life, a sense of anticipation fills the air. The hum of the computer and the soft whir of the fans provide a background soundtrack to the task at hand. The words on the screen seem to come alive as the language model ChatGPT generates responses with an uncanny ability to understand and mimic human language.

None of that was written by a human. Instead, it was generated by an artificially intelligent tool known as ChatGPT, based on the prompt “write a sensory introduction to a newspaper article about yourself.” Developed by San Francisco-based research company OpenAI, ChatGPT can understand prompts of different natures—inquisitive, philosophical, creative, interpretive and much more—making it useful for many industries and fields of work. Since late November, when it became available to the general public, it has skyrocketed in popularity and relevance. The responses to its rapid rise have been varied—some are in awe, some skeptical, some afraid.

ChatGPT is a chatbot, an artificial intelligence (AI) that can mimic human conversation. When a user gives it a prompt, the AI draws on patterns learned from the enormous body of text it was trained on to generate a response. Responses vary in length and structure, and no two are identical, even for the same prompt; users can regenerate responses to a prompt as many times as they want. Out of fear that students will misuse its capabilities, Parkway has blocked access to ChatGPT on all school Chromebooks, and it isn’t the only district to do so. Teachers like English ASC Kristen Witt are worried about ChatGPT’s consequences for student work ethic.

“[ChatGPT] would be used as something that can just write [students’] paper for [them] instead of [the students] doing the actual work,” Witt said. “What knowledge are we getting from that? What skills are we getting if we have computers just doing stuff for us? It’s just going to be regurgitated information rather than a true author’s voice.”

In this experiment, each prompt was generated three times and run through three different plagiarism checkers—among them Grammarly and Duplichecker—to see if ChatGPT really creates “original content.” (Samir Shaik)

English teacher Michelle Kerpash believes it’s easy to catch students who copy from ChatGPT: English teachers have dealt with plagiarism many times in the past, and ChatGPT isn’t as smart as some believe. In English, students often have to incorporate stylistic choices—such as varied sentence structures and vocabulary words—into their assignments, things that ChatGPT’s engine can’t intentionally encode.

“Imagine taking all the plagiarized material that’s ever existed and putting it into one paper. That’s how it reads to us [English teachers]. It feels like I’m reading a SparkNotes summary of a book,” Kerpash said. “[ChatGPT] can’t synthesize a book on its own; all it can do is pull from already published analyses of [it].”

Some students don’t consider ChatGPT “cheating” per se but rather a resource to use, like Grammarly. Witt disagrees, arguing that Grammarly and ChatGPT serve fundamentally different functions and so can’t be fairly compared.

“[ChatGPT] is going strictly off of new content, while Grammarly help[s] improve spelling, structure, punctuation, things like that. I see it as something totally different,” Witt said.

However, that’s not how science teacher Charles Cutelli sees it. Cutelli has played around with ChatGPT in the past—asking it to write a letter of recommendation to a student and even a song about the appendicular skeleton for his Human Anatomy class. 

“We had some kids go home and perform [the song], so I think [ChatGPT] can be pretty fun,” Cutelli said. “It’s a great time-saver. It [allows] students to do other things that interest them.” 

Cutelli likens the situation with ChatGPT to “The Last Question,” a short story by renowned science-fiction author Isaac Asimov. In it, humanity relies on a supercomputer AI known as Multivac to answer all of its questions.

“[ChatGPT] is just the early MultiVac, and maybe millions of years from now [it] will have reached a new[ly] evolved state. [Right now] there are some hiccups, but my guess is every day that goes by, it probably gets [smarter and smarter],” Cutelli said.

Generative AI tools like ChatGPT have never been available to students before, leading many to question what their long-term consequences will be. Cutelli thinks that ChatGPT is simply too new for anyone to fully understand those consequences. He explains that ChatGPT, like anything else in life, has good and bad sides, citing a quote from American author and economist Thomas Sowell: “There are no solutions; there are only trade-offs.”

“[ChatGPT] has its drawbacks, but it’s also going to open up a world of possibilities,” Cutelli said. “We don’t know what we’ll lose until we get to the point where we’ve lost it already.” 

Beyond ChatGPT, technology has raised questions about what the school system of the future will look like. AI has long been at the cutting edge of technological growth; ChatGPT is just its latest product. Witt believes that technology is eroding important life and social skills in students.

“There used to be this human connection that doesn’t exist anymore,” Witt said. “When you [don’t have] those people skills, I think your society is going to go downhill.”

However, Kerpash thinks that students’ misuse of technology and resources is more a matter of character than something to blame on the technology itself.

“Technology in general makes it easy for people to act unethically. For example, math teachers have been battling Photomath for years now,” Kerpash said. 

Ever since technology became the norm in schools, there have always been threats to the very concept of academic integrity; ChatGPT is not the first. And at technology’s current rate of growth, it’s plausible that someone could soon build a tool that detects cheating with foolproof accuracy, effectively solving a problem teachers have been fighting for years. In fact, that solution might be closer than we think.

“Whether it’s ChatGPT or something else entirely, how you use it and why is based on your character and not on the technology,” Kerpash said.

Can you tell the difference between ChatGPT and a real person? Take this interactive quiz to find out.