Cheating or Surviving? How AI’s Rise Turned Learning Into a Moral Meltdown

By Sevanna Emma Shaverdian

Last semester, I watched a classmate stare at her blank Google Doc before she finally whispered, “Can’t I just ask ChatGPT to write an outline for my final paper?” I looked at her with a blank expression and a smile, recognizing she didn’t want to cheat; she was simply defeated. I understood exactly how she felt, trying to navigate the academic pressure to be perfect in a world that suddenly expected us to keep up with technology we barely comprehend.

Students on college campuses all around the country are experiencing the same thing. We were thrown into a world that has been extensively transformed by artificial intelligence and told to “figure it out.” With zero instruction, we received hundreds of emails, reminders, and professors’ warnings not to use AI. However, curiosity got the best of us. Our institutions never really taught us how to use AI responsibly; they just created their own “monitored” versions.

When generative AI first emerged and gained popularity in 2023, it felt almost unreal. There was a “better” Google that would instantly give you answers, write essays, generate notes and summarize readings. But it clashed with the fine print of every course syllabus: “cite your sources, produce your own work, don’t cheat, and follow academic honesty guidelines.”

With the launch of AI, we were not confused; we realized this was the moment when the ground beneath education would drastically shift.

Educators built the foundation of instruction on predictability. We were conditioned to learn, memorize and prove our understanding. Then, in less than six months, that foundation cracked. Every day, students like me were entering college thinking we would be guided through hardships, but instead, we experienced mental turbulence. Generative AI platforms disrupted our understanding of learning, and the rules we once knew didn’t seem to matter.

We were not ready for that challenge.

Rather than preparing us to adapt to the “Age of AI,” most universities responded to generative AI with fear. “Zero tolerance” policies swept through campuses like headline news. Never did we receive guidelines on how to engage with AI ethically, how to check its biases, or how to evaluate it critically. It seemed we were being punished for using the resources that technology provided us.

Entering higher education institutions, we knew there would be challenges, but we didn’t know we would be denied the support we needed to face these challenges.

That’s the essence of this issue.

These institutions constantly remind us that we learn most when we are challenged and supported. Right now, our schools have exacerbated the challenge, confronting students with technological advances and ethical decisions, without providing the guidance and empathy needed to overcome this obstacle.

This consistent fear that the institutions have instilled does not lead to growth. In fact, it leads to secrecy. So students begin to conceal, rationalize and lie.

“Everyone else is using it” or “I only need an outline for this assignment.” In reality, we’re fascinated by AI, but we don’t want to rely on it. I know my work is something to be proud of, but in this “competitive” world, performance and perfection are valued more than creativity and curiosity. AI is not just a fun tool anymore; it’s an instrument for survival.

The complaint we Gen Zers often hear is “Students are lazier nowadays,” or “AI thinks for you.” But our actions don’t exist on their own; they are shaped by the stresses and structures around us. The undergraduate experience is filled with stressors like competitive job markets, rising tuition, lack of affordable housing and overworked professors. With all these pressures in our lives, it’s easy to see why students turn to a tool that will help them cope.

So instead of asking us why we are cheating, let’s start asking, “Fundamentally, what is wrong with our system that makes students feel they have to?” Right now, using AI is seen as a personal failure, a mark of zero creativity. Instead, it should be seen as a symptom of a system in the midst of transition.

Let’s design better environments to see students thrive. Let’s create a curriculum that offers insight into prompt design, digital ethics and AI engagement. Let’s allow professors to be transparent about their use of AI, too. Let’s stop treating the symptoms and start healing the system internally by designing better learning environments.

So, when people ask, “Why are our students cheating with AI?” maybe the better question is, “Why haven’t our institutions evolved to meet students’ real-time learning needs?”

College is about adaptation. Adapting to new environments requires both challenge and support. Until educational institutions recognize this, students will continue to find ways to survive even when the system doesn’t know how to change.

So, is AI really the problem, or is it the absence of guidance?

(Sevanna Shaverdian is an Ed.M. candidate at the Harvard Graduate School of Education, focusing on Human Development and Education Technology. Previously, she worked across education, law, and public service to advance equitable access to knowledge. Her research on AI and education argues that the future of education isn’t just digital; it’s deeply human. When she’s not reading or writing about AI in education, she’s probably chasing the perfect iced chai, getting some sun, or playing her cello.)