Lots of school districts in Bucks County and across Pennsylvania have begun incorporating some form of artificial intelligence training, policies, and even instruction. But AI remains a fuzzy, poorly understood technology.
Parents, students, and taxpayers should be asking—is your school district’s AI policy any good?
Here are some questions to ask about the local AI policy and training.
Where is it coming from?
Companies have invested huge piles of money in developing their AI, so it should come as no surprise that they are also pouring money into offering training to K-12 schools, which represent an enormous potential market.
It’s important to remember that these companies are not offering training as some sort of public service. Google has been working hard to be a presence in education, and a recent leaked internal document asserts that the point of that work is to create a “pipeline of future users.”
Non-corporate programs, like the University of Pennsylvania’s Pioneering AI in School Systems (PASS), are a better choice for public school districts. But be alert: even PASS has accepted a million-dollar donation from Google.
Corporate backing of AI training and policy isn’t just a matter of companies trying to sneakily build brand loyalty. Most Big Tech discussion of AI assumes the sale, asking questions about how you will use AI and skipping past any question of whether you will, or should.
Are the underlying ethics addressed?
AI policies and training repeatedly emphasize “ethics.” The Bucks County Beacon survey cited ethics as one of the top topics covered by AI policies. But ethics in AI is an extremely thorny question.
Two major ethical issues are part of the AI landscape. One is the use of copyrighted materials for training AI systems.
The companies behind the major AI systems have offered a variety of reasons why they shouldn’t have to pay to use the copyrighted work of authors, but in the end, AI is making money through the unpaid use of that work.
The other concern is environmental.
Data centers have become the ultimate NIMBY issue. Elon Musk has launched a data center for Grok in Memphis with unpermitted gas turbines that are filling the area with noise and pollution, and that’s just one example. Companies are targeting areas that have populations without the political clout to fight back. Meanwhile, the demand for power has actually eaten up the available gas turbines, with manufacturing slots booked until at least the end of the decade, straining both the growth of data centers and the maintenance of the national power grid.
In Pennsylvania, as many as 50 data centers are proposed, and some communities, like Montour, are fighting back by refusing zoning changes. Small towns like Archbald, PA (population 7,500) are finding themselves in the data center crosshairs: the facilities promise a drain on water and power, along with noise and pollution, while offering very few long-term jobs for the area.
Any program looking at the ethical use of AI needs to consider the issues related to the true costs of operating AI.
What does AI actually do?
Any district training for both students and staff must include a careful look at what AI actually does—and what it does not do.
If a teacher asks a chatbot like ChatGPT to write a lesson plan about Hamlet, here’s what ChatGPT does not do. It does not look through the various lesson plans available on the internet and weigh their different qualities. It does not consult its understanding of the play and then determine which pedagogical techniques might best instruct students on the major themes of the work. It does not “understand” Hamlet or teaching in any conventional sense of the word.
If a student uses ChatGPT to write a paper about Hamlet, the same holds true. It does not “read” or “think about” the play. It does not even necessarily quote the play accurately.
To trust it to do any of these things accurately is a mistake (as many lawyers can now attest).
Explaining how AI actually works can be complicated, though there are resources out there that can make it clear for laypeople. One of the most useful one-line explanations I’ve seen is this one — when you ask a chatbot to respond to a prompt, what you’re really asking is “What would a response to this look like?” We call AI mistakes “hallucinations,” but these mistakes are not a bug that can be fixed; they are an inevitable result of how AI works. In a sense, all AI products are hallucinations.
Too many people imagine that AI is perfectly objective, but like any other computer program, it holds whatever biases it is fed. The story of a newly-tweaked Grok offering ridiculous praise of Elon Musk (smarter than da Vinci, top physique, better athlete than LeBron) may be amusing, but it’s a reminder that a chatbot can be “taught” to follow whatever bias its owner feeds it.
Too many people, including students and teachers, think of AI as wise and objectively intelligent and magical. And AI marketeers have been happy to feed that illusion, suggesting that AI is like having an expert assistant. It is not, and any responsible school AI education program has to help everyone understand what AI is and is not.
In particular:
AI is not human
Research has shown considerable risk in chatbot interactions with humans in general and young humans in particular. Treating chatbots as human, capable of empathy and insight, has led to developmental problems with conversation, and chatbots tend to reinforce whatever humans bring to them, from racist bias to suicidal ideation (suicides linked to chatbots now have their own Wikipedia page). The chatbot’s ability to mimic human language leads many to conclude that there is a human-like intellect behind the language. There is not, and students need to have that truth emphasized clearly and repeatedly.
When is it okay to use AI?
There are several frameworks out there for giving students an idea of what the various levels of AI use can be. Central Bucks’ red zone vs. green zone model is a workable example. The MIT framework that Council Rock is using also provides a useful way of thinking about how and when to use AI.
The school district needs to provide considerable clarity on this issue, as it becomes increasingly clear that student reliance on AI results in far less educational achievement for those students. A study from Carnegie Mellon suggests that AI makes individuals more confident, less knowledgeable, and worse at critical thinking.
With that in mind, districts would do well to set limits on teacher and staff use of AI as well. Students who regularly see teachers and staff using AI as a shortcut can’t help but absorb the notion that such shortcuts are okay.
Who will be accountable?
Beware the “human in the loop.” Dan Davies in 2024 coined the term “accountability sink.” It refers to the part of a system that collects the blame for system errors, and in many AI systems, the accountability sink is a human. If a school implements an AI grading system for student work, the human in the loop who gets the job of checking all the AI work to make sure it’s accurate—that’s the accountability sink. Somehow, when the AI fails, it is the human in the loop’s fault.
If students violate your well-defined AI use policy, whose job is it to catch them? If the goal was to save teacher time and work, it will not help to replace their old workload with a new workload of scanning all the student work for AI violations.
How will data safety be maintained?
Your school district should already be protecting student data, and the use of AI increases the risk to that data. Students who come to view chatbots as human are willing to share all manner of personal information with them.
Inevitability, the future, and AI skepticism
It is typical for tech companies to tout their products as inevitable, and schools have been burned before. Around 15 years ago schools were convinced that computers were a necessary part of education, and districts invested millions in one-to-one initiatives that put a screen in front of every student. Now many are questioning the wisdom of that choice, even pointing to the rise of screens in school as, at worst, a possible cause of dropping test scores and, at best, “mostly useless.”
Tech leaders often predict the inevitable adoption of revolutionary technology that is just around the corner (fully self-driving cars have been “a year away” for over a decade). Such predictions should be understood as more marketing than good-faith forecasting.
The prediction that AI will soon take over everything should be understood as at least partly the marketing push of an industry that has sunk unimaginable amounts of money in the hopes that the investment will pay off and not turn out to be a huge bubble.
Will the students of today have to learn to use AI to be employable tomorrow? Perhaps. But students should also be exposed to a healthy dose of AI skepticism. A school AI “literacy” program needs to spend a hefty amount of time teaching students how to protect themselves from AI — it’s not objective, it’s not human, it’s not your friend, it’s not Earth-friendly, it’s not all-wise, and it’s not a safe space for you to unburden yourself.
A good AI program in schools should show students how AI works, what it can be used for, and why you might want to use it. It should also show students when not to use AI and why you might choose not to use it at all.