
Ever since tools like ChatGPT and DeepSeek hit the mainstream, they’ve shaken up everything from office tasks to art generation. Unsurprisingly, students quickly saw the potential — and began using AI to cheat on essays and exams. At first, it felt like a shortcut. But if AI can ace your test, what does that say about the test, or your future job?
AI outperforms real students
A recent University of Reading study showed that ChatGPT could answer modular exam questions well enough to outscore real students. Researchers submitted AI-generated answers through 33 fake student accounts; many received higher grades than their actual peers, and nearly all submissions (94%) passed as human-written.
AI also demonstrated impressive abstract reasoning, though it still struggled with complex, final-year questions. So while it excels at standard academic tasks, it’s not invincible — yet.
The key findings were:
- AI mimics human writing style – 94% of the AI-generated submissions read as human-written. The technology is sophisticated enough to convincingly imitate human prose.
- AI gets you better grades – Students who used AI to complete their papers would, on average, have outscored peers who wrote their own.
- It’s capable of seemingly abstract reasoning – AI can produce answers that appear to involve abstract reasoning, though the study also identified one clear limitation of the current generation of models.
However, there was one area where human students clearly performed better: advanced, complex, final-year questions.
What does this mean?
Published in PLoS One, the findings raise serious concerns about academic integrity. Most plagiarism detectors couldn’t catch the AI submissions, showing that detection tools have a lot of catching up to do.
This creates a tough dilemma: students are rewarded for using AI — but that undermines the value of their education. In the long run, it risks sending unprepared graduates into the workforce.
AI has already changed so much, and schools and universities may need to adapt sooner rather than later. If exams can’t be trusted, institutions need new, more comprehensive ways to measure students’ learning progress. The report should serve as a wake-up call for how academic institutions design, administer, and grade tests and exams.
But this is easier said than done. It would require embracing AI technology and incorporating student feedback to make learning culturally equitable and responsive, and so far technology has evolved faster than educational standards have.
Rethinking Assessments for the AI Era
The goal of education isn’t just to test students — it’s to help them grow intellectually and prepare them for real-world challenges. Exams and assessments should support that purpose, not just reward rote memorization. But with AI now playing a larger role in learning, schools need to rethink how they evaluate knowledge. Traditional essays and multiple-choice tests may no longer be the most effective or relevant tools in this new landscape.
A more modern approach involves aligning assessments with clearly defined learning outcomes that can be measured reliably. Each assignment should measure what students are actually meant to learn — whether that’s applying knowledge, solving real-world problems, or demonstrating critical thinking. This could mean replacing or supplementing essays with more interactive methods like presentations, case studies, e-portfolios, or quizzes that challenge students to think in abstract or creative ways.
Assessment methods also need to diversify. Instead of relying solely on individual assignments, institutions can incorporate collaborative tools such as peer reviews, group projects, or self-assessments. Bringing students into the evaluation process — through feedback loops and classroom discussions — fosters deeper engagement and helps them take ownership of their learning. These formats are also harder for AI to replicate convincingly, offering a built-in check on misuse.
Ethical concerns around AI-generated work are real, but banning the technology isn’t a sustainable answer. Some universities have moved back to in-person exams to prevent AI-enabled cheating, but that’s only part of the solution. A smarter path forward is to design assessments that AI can’t easily master — ones based on abstract reasoning, creativity, and personal reflection. Combined with regular updates to detection tools and clear policies, this approach balances the potential of AI with the need to preserve academic integrity.
AI, Cheating, and the Future of Learning
AI is now outperforming many university students on exams — and in some cases, even rivaling expert evaluators. This understandably raises concerns about academic integrity. But while AI can certainly be misused, it also offers huge potential to support learning. Used ethically, it can serve as a tutor, writing assistant, and self-assessment tool, helping students refine their skills and deepen their understanding.
The answer isn’t to fear AI — it’s to adapt. Institutions need to address these ethical dilemmas head-on by rethinking assessments, improving detection tools, and setting clear guidelines around AI use. Instead of being blindsided by this technology, we can shape how it fits into education. After all, AI isn’t going away. It’s already here — and it can be a powerful ally if used wisely.
Some universities, such as Glasgow, have returned to supervised, in-person exams to minimize the risk of AI-assisted cheating. While this may help in certain cases, it isn’t a scalable solution for all coursework. Instead, the academic world must consider a broader redesign of assessment strategies, one focused on critical thinking, creativity, and higher-order problem-solving. These are areas where AI still struggles and where human cognition shines.
Ultimately, the fear shouldn’t be about AI replacing human learning — it should be about whether education is evolving fast enough to keep up. Universities must treat AI as both a challenge and an opportunity. If integrated wisely, it can empower students, enrich the learning process, and help institutions deliver more equitable, relevant, and resilient education in the digital age.