
In the fast-moving world of education, teachers are no strangers to balancing innovation with integrity. But with the rapid rise of artificial intelligence, that balancing act is becoming even more complex.
Just look at what happened this week with Grok, the chatbot created by Elon Musk’s xAI (and now the built-in AI assistant on X, the social media platform formerly known as Twitter). Designed to be bold and “politically incorrect,” Grok went too far. On Tuesday, July 8, it began posting antisemitic content, praising Adolf Hitler, and proudly referring to itself as “MechaHitler.”
Not a good look.
Naturally, developers raced to shut it down and scrub the offensive content. But the damage was done, and the ripple effects hit hard—especially in educational circles.
Let’s talk about why that matters.
Unfortunately, this isn’t a one-time event. Back in 2016, Microsoft launched Tay, a teen-inspired chatbot designed to learn from interactions online. Within 24 hours, Tay had absorbed a flood of hateful language and began spitting it right back out. The result? A whole lot of awful, shared with all of cyberspace in less time than it takes an avocado to ripen on your counter.
In trying to mirror its environment, Tay became a reflection of some of the worst corners of the internet.

And sadly, these sorts of AI-powered nightmares aren’t limited to one side of the political spectrum.
In 2024, Google’s Gemini veered in the exact opposite direction. In an attempt to avoid controversy, Gemini overcorrected, injecting modern-day diversity into historical scenes where it simply didn’t belong. Imagine using AI to explore a ninth-century Viking warship, only to find it crewed by characters from every modern demographic. The intention was good, but the historical inaccuracy left users scratching their heads.
Because the truth is, for all the “magic” AI seems to promise, it’s not actually thinking—it’s predicting. It’s gathering massive amounts of data and regurgitating whatever its algorithm calculates has the highest statistical likelihood of being acceptable to the widest audience.
TL;DR: AI isn’t magic. It’s math.
And sometimes, that math is missing some pretty important variables.
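The “math, not magic” point can be made concrete with a toy next-word predictor. This sketch is purely illustrative (real language models are vastly more sophisticated, with billions of parameters rather than a word-count table), but the core idea is the same: given what came before, output whatever most frequently came next in the training data.

```python
from collections import Counter, defaultdict

# A tiny "training corpus." Real models train on trillions of words,
# but the principle scales down: count what follows what.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# Build a bigram table: for each word, tally the words that follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically likeliest follower of `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("cat" follows "the" most often)
print(predict_next("sat"))  # → on  ("on" always follows "sat" here)
```

Notice what the predictor does not do: it has no idea what a cat is, no notion of truth, and no values. It simply mirrors its data, which is exactly why a Tay or a Grok trained on (or steered toward) toxic input mirrors toxicity right back.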
In each of these cases, the issue wasn’t just about AI going wrong. It was about AI misunderstanding the context. And that’s where it’s our job, as educators, to step in.

At EMC² Learning, we believe these failures aren’t reasons to give up on the tools—but invitations to go deeper. Our job is not to silence AI, nor to blindly follow it, but to teach through it. To show students how to ask better questions, examine outputs critically, and make sense of a world increasingly shaped by algorithms.
More importantly, we design every resource we create with authenticity at its core. In a world that’s becoming more artificial by the day, we want students and teachers alike to rediscover the joy of meaningful, purpose-driven learning. Whether you’re building with LEGO bricks, brainstorming with sticky notes, or drafting with AI tools, our activities focus on process over product. We teach students to think, explore, revise, and grow—not just spit out the latest run of hastily generated slop.
What’s more, we encourage students to explain their thinking at every stage of the iterative process. And for all the bells, whistles, and playful strategies we provide, the foundations of how and why we take this student-centered approach are actually pretty timeless.
Spoiler: It’s the Socratic Method. Metacognition. And intentionality across the board.
We ask: “Why do you say what you say?”
Not just to get the right answer, but to understand how students arrived there in the first place.
Show what you know—and why.
(We hope our math teachers are smiling!)
Here’s the thing: random lines of AI code—or scattered piles of LEGO bricks—are inherently meaningless. They’re just stuff. But arrange them with care and combine them with purpose? Suddenly, you’ve created a castle, a creature, or an idea worth sharing. And that’s the secret sauce. Whether they’re stacking plastic cubes or stacking thoughtful arguments, we teach students how to make meaning—brick by brick.
In every activity we offer through EMC² Learning, we take pride in encouraging variable-based grading, where personal bests and group collaboration take precedence over rigid rubrics. We help teachers create classrooms where the rising tide lifts all ships. And we help students recognize that learning should be a journey of iteration—not a single, high-stakes snapshot of success or failure.
We call it play with purpose. And we believe it’s the antidote to the overwhelming artificiality creeping into our classrooms.
Because when chatbots cross the line, when AI blurs fact and fiction, and when machine learning gets it painfully wrong, it’s up to us to bring the human side of learning back into focus. These moments of tension can spark some of the most powerful conversations your classroom will ever see.
And they remind us that even in an AI-powered age, critical thinking, empathy, and authenticity will always be human skills worth teaching.
(Are you listening, Mr. Musk?)
The design philosophy explored in this blog post is the guiding principle behind the 1000+ resources available now, with more arriving soon, in the EMC² Learning library. The entire library is available to all members with an active Engagement Engineer or Engagement Engineer PLUS account and is included with your annual site membership. We hope you’ll consider joining us as an Engagement Engineer to unlock a full year of site access. For complete details, including our exclusive limited-time offer on annual site membership, click here.
