Point/Counterpoint: Educators and Students Weigh in on ChatGPT

Can AI help or hurt higher education?


Does ChatGPT have a role in education?

University-based instructors make the case for AI’s potential to level the educational playing field. Meanwhile, one Temple journalist argues that generative AI could defeat the purpose of education.

Educators For ChatGPT

In Philadelphia, as across the globe, education stands at a transformative crossroads thanks to advances in artificial intelligence, particularly generative AI (genAI). To some educators, this subset of AI technologies — which includes ChatGPT — comes with risks like a rise in cheating and a reduction in critical thinking. But what are the benefits for our students and teachers?

Imagine a classroom where all students receive personalized learning materials that adapt to their individual pace and learning preferences. GenAI makes this possible by enabling the creation of dynamic learning environments. Schools in other parts of the world, including Sweden and China, have actively implemented AI-powered tools that are not just responsive but are designed to evolve with the learner. For example, if a student excels in math but struggles with reading, an AI system like Squirrel AI can adjust content accordingly, ensuring that each student can thrive. The harsh reality for many schools is that limited funding and resources, combined with overcrowded classrooms, prevent teachers from providing such personalized lesson-planning on their own.

Moreover, these genAI-driven systems aren’t limited to just tailoring educational content. They also excel in providing real-time, personalized feedback. AI can act as a virtual tutor. It can engage in meaningful dialogues, answer student queries instantly, and provide explanations specific to those students’ immediate educational needs. Traditional methods often delay feedback until tests or papers are graded, but AI systems can offer immediate insights and corrections. In this way, genAI can play a powerful role in helping struggling writers with their long-term proficiency, which can have the secondary effect of creating a more equitable educational landscape — especially for immigrants new to English.

Furthermore, genAI’s capabilities in analyzing vast amounts of educational data can revolutionize how educational paths are designed. GenAI can detect underlying patterns in student performance and suggest customized resources that align with their learning needs and preferences. This level of customization was unthinkable a few decades ago but is now a reality that could become as common as textbooks in our classrooms.

The power of genAI in education extends beyond personalized learning. It also enhances self-regulation among students. With better tools for setting goals, monitoring progress, and evaluating strategies, students can take charge of their learning journey. They become not just participants in their education, but drivers of it.

That said, genAI isn’t perfect. It makes errors and sometimes includes sexist and racist content. We must be wary of the generalizations that it makes. Yet this, too, can be a tool for education. By cultivating our children’s critical thinking, we can promote critiques of the information offered and help children become skeptical of information that appears online.

We must be sensitive, though, to overuse. Some children are spending far too much time on screens already.

As a society, we need to focus less on how students may use genAI inappropriately and more on its potential for future generations. When genAI systems are designed and implemented in ways that reflect the science of effective teaching and learning, they may lead us to an educational system in which every student, regardless of background or learning style, has access to the best personalized education. — Roberta Michnick Golinkoff, Joshua Wilson, Amanda Delgado and Kathy Hirsh-Pasek

A Student Against ChatGPT

Every student has heard it: the Great ChatGPT Debate. Is it a cheating tool? Or is it the steady march of progress — use it or risk getting left behind?

Although opinions differ within the college bubble, I tend to hear one common narrative about generative AI (a catchall for systems like ChatGPT), which goes something like this: Students are increasingly handing in work that’s written — wholly or in part — by these cutting-edge tools. Meanwhile, teachers and administrators are freaking out. It’s true, somewhat. “I’ll ChatGPT it” is a phrase I’ve heard more and more in recent months from classmates sweating a deadline. Some use it to draft an essay, others just to edit their work. And then there’s me.

As if to acknowledge that academia simply can’t keep kids away from these tools, a professor of mine recently dedicated a day to playing around with the chatbot. She challenged us to test whether ChatGPT could be harnessed as a useful tool. Our first task: We wrote short paragraphs to run through ChatGPT, instructing the chatbot to check for grammar and spelling. It made some helpful edits — but it made just as many mistakes. Oddly, ChatGPT often claimed to have made changes it hadn’t actually made, giving itself credit for copy that was already in the original. At least it admitted to lying once we called it out.

Sure, some of my peers use ChatGPT to help them brainstorm, organize their thoughts, or edit their grammar and syntax — situations in which I can see the chatbot being useful. But I just don’t understand the hype.

The chatbot isn’t actually very good at responding to prompts with factual information. It’s been known to falsify quotes, sources and citations. It’s a common enough occurrence that a term has been coined to describe when ChatGPT just makes things up: “AI hallucinations.” The old adage has it that not everything on the internet is true. Neither is everything on ChatGPT.

Possibly more worrying than the shoddy results produced by ChatGPT is this: When it comes down to it, everything the chatbot produces is stolen. The chatbot has been trained on hundreds of gigabytes of information from the internet, which is turning into a huge intellectual property issue. Were IP holders compensated for the use of their IP to train ChatGPT? Are they paid every time the chatbot uses some of what it learned from them to answer a question? I think we all know the answer to that one.

Proponents of ChatGPT say it will get better and better with time, as each additional use trains the chatbot with more data. But tech journalist Jack Clark recently wrote that the opposite could be true: “You can break AI systems by training them exclusively on AI-generated data.” As more AI-created content floods the internet, will ChatGPT begin to self-cannibalize, increasingly training itself on computer-generated text rather than on what’s made by humans?

ChatGPT can steal and mimic — but I believe it will never create anything original. It will never be human. Still, it exacts a human cost. Much to the chagrin of some students, essay writing is an important part of understanding and synthesizing information. And we are in school, after all, to gain that understanding. Maybe it’s the journalist in me, but I would never want to pass off someone else’s writing as my own. Especially not writing that has the ethical and legal concerns inherent in ChatGPT. — Samantha Roehl

About the Authors

Roberta Michnick Golinkoff is Unidel H. Rodney Sharp Professor at the University of Delaware and the co-author of Making Schools Work.

Joshua Wilson is an associate professor of education at the University of Delaware and co-facilitator of its AI for Teaching and Learning Working Group.

Amanda Delgado is a doctoral student at the University of Delaware who is researching children’s engagement and learning from storybook reading and digital media.

Kathy Hirsh-Pasek is a professor of psychology at Temple University and a senior fellow at the Brookings Institution. She is the co-author of Making Schools Work.

Samantha Roehl graduated from Temple University in May. She majored in communication studies and was editor-in-chief of REFINE Magazine, a women’s lifestyle mag on campus.


Published in the June 2024 issue of Philadelphia magazine.