Monday, June 9, 2025

Another Bad AI Classroom Guide

We have to keep looking at these damned things because they share so many telltale characteristics; we need to learn those marks so we can recognize these guides when they turn up again and react properly, i.e. by throwing moldy cabbage at them. I read this one so you don't have to.

And this one will turn up lots of places, because it's from the Southern Regional Education Board.

SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although it involves legislators from each of the states, some appointed by the governor, it is a non-partisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina.

But hey-- they're all about "helping states advance student achievement." 

SREB's "Guidance for the Use of AI in the K-12 Classroom" has a big fat red flag right off the top-- it lists no authors. In this golden age of bullshit and slop, anything that doesn't have an actual human name attached is immediately suspect.

But we can deduce who was more or less behind this-- the SREB Commission on Artificial Intelligence in Education. Sixteen states are represented by sixty policymakers, so we can't know whose hands actually touched this thing, but a few names jump out.

The chair is South Carolina Governor Henry McMaster, and his co-chair is Brad D. Smith, president of Marshall University in West Virginia and former Intuit CEO. As of 2023, he passed Jim Justice as the richest guy in WV. And he serves on lots of boards, like Amazon and JPMorgan Chase. Some states (like Oklahoma) sent mostly legislators, while some sent college or high school computer instructors. There are also some additional members, including Youngjun Choi (UPS Robotics AI Lab), Kim Majerus (VP US Public Sector Education for Amazon Web Services) and some other corporate folks.

The guide is brief (18 pages). Its basic pitch is, "AI is going to be part of the working world these students enter, so we need schools to train these future meat widgets so we don't have to." The introductory page (which is certainly bland, vague, and voiceless enough to be a word string generated by AI) offers seven paragraphs that show us where we're headed. I'll paraphrase.

#1: The internet and smartphones mean students don't have to know facts. They can just skip to the deep thinking part. But they need critical thinking skills to sort out online sources. How are they supposed to think deeply and critically when they don't have a foundation of content knowledge? The guide hasn't thought about that. AI "adds another layer" by doing all the work for them, so now they have to be good prompt designers. Which, again, would be hard if you didn't know anything and had never thought about the subject.

#2: Jobs will need AI. AI must be seen as a tool. It will do routine tasks, and students will get to engage in "rich and intellectually demanding" assignments. Collaborative creativity! 

#3: It's inevitable. It is a challenge to navigate. Stakeholders need guidance to know how to "incorporate AI tools while addressing potential ethical, pedagogical, and practical concerns." I'd say "potential" is holding the weight of the world on its shoulders. "Let's talk about the potential ethical concerns of sticking cocaine in Grandma's morning coffee." Potential.

#4: This document serves as a resource. "It highlights how AI can enhance personalized learning, improve data-driven decision-making, and free up teachers’ time for more meaningful student interactions." Because it's going to go ahead and assume that AI can, in fact, do any of that. Also, "it addresses the potential risks, such as data privacy issues, algorithmic biases, and the importance of maintaining the human element in teaching." See what they did there? The good stuff is a given certainty, but the bad stuff is just a "potential" down side.

#5: There's a "skills and attributes" list in the Appendix.

#6: This is mostly for teachers and admins, but lawmakers could totally use it to write laws, and tech companies could develop tech, and researchers could use it, too! Multitalented document here.

#7: This guide is to make sure that "thoughtful and responsible" AI use makes classrooms hunky and dory.

And with that, we launch into The Four Pillars of AI Use in the Classroom, followed by uses and cautions.

Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.

"To best prepare students for an ever-evolving workforce..." 

"However, tasks that students will face in their careers will require them..."

That's the pitch. Students will need to be able to think "critically and creatively." So they'll need really challenging and "cognitively demanding" assignments. "Now more than ever, students need to be creators rather than mere purveyors of knowledge."

Okay-- so what does AI have to do with this?
AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.
This is some fine-tuned bullshit here, counting on the reader to imagine that they heard something that nobody actually said. AI "draws on" a bunch of "knowledge" in the sense that it sucks up a bunch of strings of words that, to a human, communicate knowledge. But AI doesn't "know" or "understand" any of it. Does it "analyze" the material? Well, in the sense that it breaks the words into tokens and performs complex math on them, there is a sort of analysis. But AI boosters really, really want you to anthropomorphize AI, to think about it as human-like in nature and not alien and kind of stupid.
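If you want to see just how un-mystical that "analysis" is, here's a minimal sketch using OpenAI's open-source tiktoken tokenizer (one real tokenizer among many; the specific library and sentence are just my choices for illustration). The "knowledge" the model "draws on" enters the machinery as a list of integers:

```python
# A minimal look at what an LLM actually "sees": not words, not ideas,
# just integer token IDs. (tiktoken is OpenAI's open-source tokenizer;
# any other tokenizer would make the same point.)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "AI draws on a broad spectrum of knowledge."
tokens = enc.encode(text)

print(tokens)              # a list of integers -- no meaning attached to any of them
print(enc.decode(tokens))  # round-trips back to the original string
print(len(tokens))         # how many tokens the sentence became
```

Everything downstream of that is arithmetic on those integers; "knowing" and "understanding" are things we project onto the output.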

"While AI should not be the final step in the creative process, it can effectively serve in the early stages." Really? What is it about the early stages that makes them AI-OK? I get it--up to a point. I've told students that they can lift an idea from somewhere else as long as they make it their own. But is the choice of what to lift any less personal or creative than what one does with it? Sure, Shakespeare borrowed the ideas behind many of his plays, but that decision about what to borrow was part of his process. I'd just like to hear from any of the many people who think AI in beginning stages is okay why exactly they believe that the early stages are somehow less personal or creative or critical thinky than the other stages. What kind of weird value judgment is being made about the various stages of creation?

Use AI to "streamline" lesson planning. Teach critical thinking skills by, and I'm only sort of paraphrasing here, training students to spot the places where AI just gets stuff wrong. 

Use AI to create "interactive simulations." No, don't. Get that AI simulation of an historical figure right out of your classroom. It's creepy, and like much AI, it projects a certainty in its made-up results that it does not deserve. 

Use AI to create a counter-perspective. Or just use other humans.

Cautions? Everyone has to learn to be a good prompt engineer. In other words, humans must adjust themselves to the tool. Let the AI train you. 

Recognize AI bias, or at least recognize it exists. Students must learn to rewrite AI slop so that it sounds like the student and not the AI, although how students develop a voice when they aren't doing all the writing is rather a huge challenge as well. 

Also, when lesson planning, don't forget that AI doesn't know about your state standards. And if you are afraid that AI will replace actual student thinking, make sure your students have thought about stuff before they use the AI. Because the assumption under everything in this guide is that the AI must be used, all the time.

Pillar #2
Use AI to streamline teacher administrative and planning work.

The guide leads with an excuse-- "teachers' jobs have become increasingly more complex." Have they? Compared to when? The guide lists the usual features of teaching-- same ones that were there when I entered the classroom in 1979. I call bullshit. 

But use AI as your "planning partner." I am sad that teachers are out there doing this. It's not a great idea, but it makes a sad kind of sense for a generation that entered the profession thinking that teacher autonomy was one of those old-timey things, about as relevant as the penny-farthings that grampa goes on about. And these suggestions for use. Yikes.

Lesson planning! Brainstorming partner! And, without a trace of irony, a suggestion that you can get more personalized lessons from an impersonal non-living piece of software.

Let it improve and enhance a current assignment. Meh. Maybe, though I don't think it would save you a second of time (unless you didn't check whether AI was making shit up again). 

But "Help with Providing Feedback on and Grading Student Work?" Absolutely not. Never, ever. It cannot assess writing quality, it cannot do plagiarism detection, it cannot reduce grading bias (just replace it). If you think it even "reads" the work, check out this post. Beyond the various ways in which AI is not up to the task, it comes down to this-- why would your students write a work that no other human being was going to read?

Under "others," the guide offers things like drafting parent letters and writing letters of recommendation, and again, for the love of God, do not do this! Use it for translating materials for ESL students? I'm betting translation software would be more reliable. Inventory of supplies? Sure, I'm sure it wouldn't take more than twice as much time as just doing it by eyeball and paper. 

Oh, and maybe someday AI will be able to monitor student behavior and engagement. Yeah, that's not creepy (and improbable) at all.

Cautions include a reminder of AI bias, data privacy concerns, and overreliance on AI tools and decisions, and I'm thinking "cautions" is underselling the issues here. 

Pillar #3
Use AI to support personalized learning.

The guide starts by pointing out that personalized learning is important because students learn differently. Just in case you hadn't heard. That is followed by the same old pitch about dynamically adaptive instruction based on data collected from prior performance, only with "AI" thrown in. Real time! Engagement! Adaptive!

AI can provide special adaptations for students with special needs. Like text-to-speech (is that AI now?). Also, intelligent tutoring systems that "can mimic human tutors by offering personalized hints, encouragement and feedback based on each student's unique needs." So, an imitation of what humans can do better.

Automated feedback. Predictive analytics to spot when a student is in trouble. AI can pick student teams for you (nope). More of the same.

Cautions? There's a pattern developing. Data privacy and security. AI bias. Overreliance on tech. Too much screen time. Digital divide. Why those last two didn't turn up in the other pillars I don't know. 

Pillar #4
Develop students as ethical and proficient AI users.

I have a question-- is it possible to find ethical ways to use unethical tools? Is there an ethical way to rob a bank? What does ethical totalitarianism look like?

Because AI, particularly Large Language Models, is based on massive theft of other people's work. And that's before we get to the massive power and water resources being sucked up by AI.

But we'll notice another point here-- the problems of ethical AI are all the responsibility of the student users. "Teaching students to use AI ethically is crucial for shaping a future where technology serves humanity’s best interests." You might think that an ethical future for AI might also involve the companies producing it and the lawmakers legislating rules around it, but no-- this is all on students (and remember-- students were not the only audience the guide listed) and by extension, their teachers. 

Uses? Well, the guide is back on the beginning stages of writing:
AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.

These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.

Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."

Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking. 

Concerns? Bias, again. Data security-- which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat-- the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.

Appendix

A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item: "Understand AI's potential and what it is and is not." The "is and is not" part is absolutely important, and the guide absolutely avoids actually addressing what AI is and is not. That is a basic feature of this guide--it's not just that it doesn't give useful answers, but that it fails to ask useful questions.

It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros. 


