Los Angeles Unified and AI: The lessons aren’t what they seem

A few weeks ago, in a blog post about AI, I wrote “I have a strong sense that the traditional education system is not meeting this moment…” I was thinking about how traditional districts are slowed by bureaucracy, rules, and norms, such that they would not adopt AI as quickly as private schools and organizations outside education, and that when they did adopt AI, they would be more likely to run into problems.
 
So, when reports appeared recently of Los Angeles Unified School District trying and failing at an early AI implementation, my first thought was that this fit my earlier narrative perfectly. The country’s second-largest school district had tried, flopped, and landed in the New York Times and plenty of other media outlets as, at the very least, a poster child for caution.
 
But as I looked more deeply into the stories, my opinion changed. In fact, I now see a story that is a (mostly) reasonable attempt to work with AI. Yes, it failed—but in tech and innovation, failure is celebrated as a temporary setback on the path to ultimate success.
 
Background
The New York Times article “A.I. ‘Friend’ for Public School Students Falls Flat” explains how “An A.I. platform named ‘Ed’ was supposed to be an ‘educational friend’ to” students in LAUSD, helping them with academic and mental health resources, and also interacting with parents about their children’s attendance and test scores.
 
The problem? LAUSD hired a start-up company (AllHere Education) to create Ed, and the company has since furloughed most of its staff while the founder and CEO has “left her role.” According to the article, the project, which the company won in a competitive bid, “represented a vast and unwieldy challenge for the start-up.” Compounding the problem is that LAUSD’s superintendent talked about Ed in a presentation at the ASU+GSV technology conference, and while I don’t know what he said, I would lay down a hefty wager that it was long on promise and short on potential pitfalls.
 
Many subsequent articles have delved into the situation. Most of them are critical of LAUSD. Some of those criticisms are valid, but I think the overall media message is far too negative, with the large majority of interpretations skewing toward the worst reading when there are other ways to look at the situation. For the rest of this post, I’m going to focus on what I see as the overly negative stance—with the full acknowledgement that, as they say, “mistakes were made.”
 
Digging deeper
The74’s article “Turmoil Surrounds LA’s New AI Student Chatbot as Tech Firm Furloughs Staff Just 3 Months After Launch” makes a couple of key points.
 
First, the article (and its title) states that the furloughs happened just after launch. That’s true but misleading, because it ignores that the contract was signed in July 2023, which means discussions almost certainly began in early 2023 at the very latest, and more likely sometime in 2022. This was not, as the headlines imply, a case of the district signing with a company that ran into major problems soon after.
 
Second, this article and others lead with the contract being for $6m, which is true. But they also note that only about $2m has been paid out. Presumably the remaining funds will be withheld for lack of performance and never paid. A $2m mistake isn’t great, but it’s certainly much smaller than a $6m one.
 
But was it a mistake?
Another point that comes through in several of the reports is that the LAUSD contract was far larger than any other contract the company held.
 
Dan Meyer has a valuable take on this, saying “this seems like much more of a ‘startup’ story than an ‘artificial intelligence’ story to me.” He’s right, in a very real and important sense. LAUSD made a bad bet on a company that wasn’t nearly ready.
 
But at every ed tech conference, an ongoing theme is that education needs to be more innovative. “Being more innovative” inherently means taking risks. Taking risks means you are going to lose sometimes.
 
That’s the whole approach of well-known investors and incubators like Y Combinator. They know most of their investments are going to fail. In the post-secondary space, Arizona State University is celebrated for innovation, and it has plenty of failures on its path to overall success. Betting on small companies and early-stage technologies is a feature of this approach, not a bug.
 
They should have proceeded more slowly…wait, what?
Education Week’s “Los Angeles Unified’s AI Meltdown: 5 Ways Districts Can Avoid the Same Mistakes” makes a common point, citing several outside experts and examples to make the case that LAUSD moved too fast and should instead have piloted, tested, and so on.
 
Here’s one example from that article, regarding a chatbot at Georgia Tech, which is meant as a counterpoint to the LAUSD process:
 
“That project began in 2016. About eight years later, the bot is still being put through its paces. It’s only been used in about 60 or 70 of the institution’s roughly 3,000 classes. Instead of going big quickly, Goel and his team have methodically used teacher feedback to improve the tool.”
 
For those keeping score at home, that’s eight years to reach about 2% of the classes at Georgia Tech. Is that really supposed to be an example of how this should be done? At that rate, full implementation would take 400 years!
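For the skeptics, here’s the back-of-envelope math, taking the midpoint of roughly 65 classes:

65 ÷ 3,000 ≈ 2% of classes covered in 8 years
(100% ÷ 2%) × 8 years = 50 × 8 = 400 years for full coverage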
 
Hindsight makes it easy to say that a district should have gone slower, or faster. But some of the examples of “slower” being offered are hardly exemplars.
 
Three final thoughts
An EdSurge article linked to the LAUSD contract for the chatbot services. I’m no expert on what a contract should say, but I was struck by how detailed the listed services are.
 
No contract can fully protect a district—or anyone—if one party simply fails to meet its obligations. In my experience, this is a reality understood by anyone who has run an organization of any size, and often missed by observers who have never had to make consequential organizational decisions.
 
Contracts set boundaries and guardrails and create consequences, but they are also based on norms and ethics. From Enron to Theranos to Bernie Madoff, it’s abundantly clear that contracts exist alongside trust and ethics—they don’t fully take the place of such things.
 
That’s not to say that AllHere Education was unethical. I have no insights beyond what anyone can find with some web searches. But it’s clear that the company was well regarded, receiving extensive accolades. And the founder had a background in education, which I take as a highly positive sign for any ed tech startup.

There are many ways that districts may fail in their quest to use AI. They may go too slow, or too fast. Or if they are going in the wrong direction, then speed doesn’t matter. Anyone who tells you they know exactly what is going to happen with AI is lying—and that’s without even taking into account the web of bureaucracy and regulations and politics that school districts have to navigate when considering major changes.
 
Some districts will undoubtedly fail, and some will fail in ways that deserve criticism. It’s incumbent on observers and advisors to analyze each situation before jumping on critical bandwagons. In my mind, in the case of LAUSD and the chatbot, the criticisms have largely been overblown and overwrought. Every time someone says that education needs to innovate, we need to recognize that failures will happen. When those failures happen, we sometimes need to acknowledge, learn, and move on—not pile on.
