EMC² Blog

Gaming the System: 5 Big Problems with AI Detection Software

If Bill Gates is to be believed, then the “Age of Artificial Intelligence” is very much upon us! In fact, the World Economic Forum predicts that AI-powered disruptions across all sectors will result in a staggering 85 million jobs being eliminated or rendered obsolete by the year 2025. Yet in spite of this massive paradigm shift, the same report projects that AI-powered innovations will create some 97 million new jobs in the same time frame. A 2023 report from Goldman Sachs goes one step further, suggesting that some 300 million white-collar jobs will be disrupted in the coming years thanks to the radical advancements made possible by the advent of Artificial Intelligence.

In short: AI is a very, very big deal. And it’s here to stay. Again, to quote the Microsoft founder:

"The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it."

In the wake of all this hubbub around all things AI, it’s only natural for educators to turn to those time-tested cornerstones of classroom “best practices” for guidance as we navigate these uncharted waters. Frustratingly, the jury is still out among some of the largest players in the education landscape — and their reactions to how best to help teachers make sense of all this AI chicanery have been split at best.

On the “pro” AI usage side of the aisle, the International Baccalaureate (IB) program has made it clear that it “will not ban the use of AI software. The simplest reason is that it is an ineffective way to deal with innovation.” That said, there is still quite a bit of work to be done on just how, exactly, this whole AI-in-the-classroom issue will actually take shape in the days to come. And IB is still very much in the early stages of developing tools, procedures, and teaching strategies to help educators work with students to infuse AI-assisted writing and research into their classrooms while still abiding by the program’s high standards of academic integrity.

Meanwhile, the good people over at The College Board — the folks responsible for the AP Exams and many of the world’s best-known standardized tests — have landed squarely in the “anti” AI camp, writing in no uncertain terms that “Students are categorically prohibited from using any and all Artificial Intelligence tools (e.g. ChatGPT or DALL-E) or essay writing services (e.g. Chegg or Course Hero) to guide, brainstorm, draft, or create student work related to any AP assessment, including written projects and performance tasks.” Naturally, this hard-line stance means that students in AP classrooms should be well advised that getting caught using AI-assisted writing technologies will come at a steep price. And this, in turn, has led to an inevitable wave of all-new AI detection software being developed (and sold), ostensibly to help teachers suss out the cheaters and catch students in the act of this latest form of cutting-edge plagiarism.

Enter Turnitin.com and the rest of the usual suspects, simultaneously riding in on the anti-AI bandwagon while equipping classroom teachers with all sorts of new AI-detection tools (that are, ironically, all powered by AI) to help them stomp out any of those would-be-cheaters in their midst. We’ll dive a bit deeper into these tools’ strengths and shortcomings in just a second.

But with all of these mixed signals and new technologies at play in real time, a series of fundamental questions at the heart of all this commotion have yet to be answered: what’s a classroom teacher to do in order to separate what’s authentically student-crafted from what’s simply the product of AI? What does authentic engagement look like in a world that is rapidly adopting artificial intelligence? And how can educators continue to create rigorous, relevant assessments and classroom experiences in an era where it seems that quite literally everything can be found on the internet or generated by machine in less than a minute’s time?

While detection software might seem tempting, here are five things to keep in mind.

1. Any Technology is Prone to Make Mistakes

Earlier this week, columnist Geoffrey A. Fowler published an article in The Washington Post titled “We tested a new ChatGPT-detector for teachers. It flagged an innocent student.” Given the rapid growth and real-time fluctuation of these AI writing technologies, that headline feels like something that just about anyone should probably have been able to see coming from a mile away.

When class sizes become too big, it’s only natural for a teacher to lean on whatever automated processes they can get their hands on in order to expedite their workflows. The problem, however, is that no machine is perfect. So when the automated AI detector suggests that any given essay might contain something fishy, a teacher is left with two equally awful options: blindly trust the machine’s automated scoring, or set off on an even more time-consuming goose chase (or is it a witch hunt?) in search of proof that a student has, in fact, genuinely completed their work as assigned. As Fowler asks in his article for the Post: “Turnitin also says its scores should be treated as an indication, not an accusation. Still, will millions of teachers understand they should treat AI scores as anything other than fact?”

Teachers are pressed for time as it is. And once an automated system tells us there’s a faster route from point A to point B, it’s only natural for many of us to take its advice to heart. Remember that scene from The Office where Michael Scott blindly trusts his rental car’s GPS only to end up driving it directly into a lake? Yup. That’s exactly the sort of issue that can leave teachers (and their students) feeling “all wet” once an automated AI detector spits out a false positive and flags a student for a supposed crime that they simply did not commit.
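To see why even a “small” error rate matters, here’s a quick back-of-envelope sketch in Python. The 1% false positive rate below is a purely hypothetical figure chosen for illustration — not a measured rate for Turnitin or any other product — but the arithmetic shows how fast innocent flags pile up at scale:

```python
# Back-of-envelope arithmetic: even a small error rate wrongly flags
# real students at scale. The 1% rate below is purely a hypothetical
# figure for illustration, not a measured rate for any real product.
false_positive_rate = 0.01

# One teacher's term: roughly 150 essays across several class sections
essays_per_term = 150
flagged_per_term = round(essays_per_term * false_positive_rate, 1)
print(flagged_per_term)     # 1.5 -- one or two innocent students per term

# A mid-sized district: 100,000 essays per school year
district_essays = 100_000
flagged_district = round(district_essays * false_positive_rate, 1)
print(flagged_district)     # 1000.0 -- a thousand wrongful flags a year
```

In other words, a detector can be “99% accurate” and still generate a steady drip of false accusations in every classroom that uses it.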

2. Indications Quickly Become Accusations

In 1968, celebrated Brazilian educator, philosopher, and literacy advocate Paulo Freire published his watershed book Pedagogy of the Oppressed. In it, he masterfully outlines the problem with so many traditional structures of education — and notes how the overwhelming majority of these systems don’t actually seek to liberate the individuals whom they were ostensibly designed to serve. Instead, argues Freire, we simply continue to feed new bodies into a system that’s really just been cleverly designed to keep the folks in charge, well, in charge. This is exactly the problem when an automated AI detector’s “indication” immediately escalates into what we might like to think of as a “teachable moment” — but what often feels a whole lot more like a flat-out accusation to many of our students.

Rather than creating an opportunity for dialogue between student and teacher (where compassionate educator pulls the young pupil aside and says something to the effect of “Hey Billy. I think you’re a really talented writer! Can you help me understand a bit more about this piece you’ve written?”), we inadvertently set up entire systems that send a message that feels a heck of a lot more like “Hey Billy. I caught you cheating.”

And what’s actually more important? The conversation or the consequence? Because mistakes can happen, both on the AI detection side of things and on the hastily-written student side of the aisle. But every time a well-meaning educator is left with no choice but to call a student out over one of those inevitable and awkward moments where there’s even a whiff of plagiarism afoot, the message that we send to our students is loud and clear.

“My teacher thinks I’m the kind of student who would cheat.”

The implications here are broad and far-reaching, and made even more concerning when viewed through the lenses of trauma-informed teaching, race, and equity. In short: when a teaching faculty that’s primarily well-educated, middle class, and white acts on information provided by AI-detection software to “police” the supposedly inappropriate behavior of students, many of whom have lived experiences that simply don’t look like those of their teachers, the potential for harm is all too real.

In the words of the ever-quotable Freire: “If the structure does not permit dialogue, the structure must be changed.”

3. Automated Grading Perpetuates Bad Pedagogy

If there was ever a single institution to defer to when looking for advice on how to effectively teach writing, it feels only logical that we’d start by turning to the National Council of Teachers of English (NCTE). These folks love reading and writing — and have since their founding way back in 1911. For well over a century, these dedicated professionals have poured time, effort, and professional research into figuring out the specific “best practices” for structuring pedagogical approaches that are equal parts engaging and driven by real-life science — all in support of instructional practices that cultivate lifelong writers and readers.

(Incidentally, you can tell that the NCTE is a group of English teachers because even in naming their organization, they used six words when four would have sufficed.)

But as Carol Jago, past president of the NCTE once so eloquently remarked in just twelve short words…

As veteran teachers, we’ve always known that there is something of a dirty little secret when it comes to reading and writing — particularly when it comes to grading all of this stuff. Nightly reading logs are breeding grounds for a myriad of tall tales and flat-out fabrications. Daily gotcha quizzes gave rise to an entire generation raised on Cliff’s Notes. And the time-honored practice of tasking students with massive take-home writing assignments has led to all manner of — ahem — “creative” workarounds, ranging everywhere from private tutors to parent-written term papers to sprawling online marketplaces where pre-written submissions can be bought and sold with the click of a button. Such is life in a world where “if we don’t grade it, they won’t read it” — and such becomes the perpetual story of an increasing disconnect between the work that our students submit and the work that they’ve actually completed themselves.

Enter the brave new world of Artificial Intelligence, and our students are quickly discovering all sorts of new ways to come to class with their traditional assignments completed in record time… now with even less effort, time, and original student thought required than ever before. But when we continue to use AI-powered detection software to “catch students in the act” of (checks notes… wait, that can’t be right)… using AI to write their compositions in the first place, what we’re really doing is perpetuating a system that rewards those students brave, willing, or clever enough to get a paper turned in that can find its way past the gatekeepers.

Once again, to quote the NCTE’s Position Statement on Writing Instruction in School:

Writing instruction often mirrors test preparation, with students filling in templates and following formulas rather than making important and intentional decisions about writing for authentic audiences and purposes. This kind of writing instruction focuses almost exclusively on “the production of first and final drafts with less scope for an elaborated writing process” (Applebee & Langer, 2009, p. 24).

As our world — and our classrooms — become increasingly automated, teaching students to outsource their writing to highly automated robots creates an entire generation of canned, flat, and formulaic writers and thinkers. And if the latest research from organizations ranging from LinkedIn to Google to The World Economic Forum is to be believed, training an entire generation of students to “think inside the box” is exactly the sort of short-sighted approach that will ultimately train our students to auto-write themselves right out of a job.

But so long as we continue to value the finished PRODUCT over the writing and creation PROCESS, this is exactly and inevitably the sort of scenario that we will continue to see in our classrooms. And while AI-powered detection tools promise all sorts of machine-like speed and efficiency — they don’t actually solve the deeper problem one bit. 

4. It Escalates a War of "Us vs. Them"

While Comedy Central’s long-running animated series South Park might not be anywhere near school appropriate — there’s no denying that the show has made a tremendous impact on pop culture for the better part of the past three decades, thanks in no small part to its timely (if often vulgar) social commentary and unabashed willingness to poke fun at some of the biggest issues in the cultural zeitgeist. So perhaps it should come as no surprise that season 26, episode 4 of the show decided to turn a satiric eye toward all the hubbub surrounding the sudden explosion of all things artificial intelligence. In the episode, titled “Deep Learning” (and fittingly co-written by ChatGPT itself), the series’ well-meaning but perpetually self-centered fourth grade teacher Mr. Garrison stumbles upon ChatGPT and quickly realizes he can use it behind the scenes to help him “grade all of these stupid papers all the time.”

Yikes. Talk about a red flag.

The running joke throughout the episode is that Mr. Garrison remains blissfully unaware that the overwhelming majority of those same papers have, of course, been written by students in his classroom who are using ChatGPT (“Maybe I taught them a little too well?” he opines). And as the inevitably ridiculous hijinks continue to unfold throughout the regularly scheduled academic year, both Mr. Garrison and his fourth graders start to become increasingly paranoid that they will end up getting caught by the other for having found a way to game the system to their unfair advantage.

As the old saying goes: “do as I say, not as I do,” right?

To be clear: telling students that they have no business using AI software in our classrooms is both academically noble and very well intentioned. But when teachers immediately turn around and use AI-powered software themselves in order to stay one step ahead of our students (or worse, with the express purpose of finding ways to punish them), we’re only furthering the divide between what’s fair game for us “hard working” teachers and what’s off limits for all of those “lazy and entitled” kids. And this reeks of the exact sort of hypocrisy that can forever turn a student against an entire system of education.

5. There is a Better Way to Play

AI-powered scoring software might seem shiny and lustrous at first glance, but when we look closer, it ultimately amounts to little more than fool’s gold. The sage wisdom to “trust, but verify” feels like an apt reminder that all the cold mechanical efficiency in the world can never replace the most fundamental key to success at the heart of every great classroom on the planet: a passionate educator who genuinely loves the work that they do, and who eschews the shortcuts in favor of dedicating their efforts to cultivating authentic, lasting relationships with the students whom they serve.

We need to stop perpetuating a system where our students are driven only by fear of punishment and cost-benefit analysis to turn in something passable enough to earn the highest grade. While it’s easy to dismiss gamification as just another passing fad in a world full of silver bullets for education — we consistently find that, when done properly, authentically gamified instruction goes light years beyond the bells and whistles of brightly colored guessing games and digital Skinner Boxes. At EMC² Learning, we help teachers design ecosystems powered by student autonomy, voice, and choice. We create game-inspired techniques that remove barriers and find new ways to reward and incentivize unprecedented levels of student mastery. And we take the most exciting, addictive, and engaging elements hard at work (and at play) in the fun and games our students are wild about outside of the classroom, and use them to construct endless new invitations for young learners to return to the table and play with us again and again. To wit: we help teachers build the academic equivalent of a Trojan Horse for deep learning.

At EMC² Learning, we simply believe there is a much better way for everyone to play. And that’s why we lean so heavily into creating fully editable resources and student-centered pedagogical approaches that leverage all the very best things that are intrinsically motivating to our learners rather than simply cooking up elaborate systems where they’re merely mashing buttons to lock in the top score.

We say it time and time again, but it bears repeating: when a person is captivated, they don’t need to be held captive.

The activities featured in this blog post are just a handful of the 600+ resources available in the EMC² Learning library. This entire library is available to Engagement Engineers and members of the Creative Corps, and is included with your annual site membership. We hope you’ll consider joining us as an Engagement Engineer to unlock a full year of site access. For complete details including our exclusive limited time offer for annual site membership, click here.
