In designing a new artificial intelligence curriculum, the product developers and leaders at LEGO Education quickly ran into a major challenge: defining “AI literacy.”
The world of computer science and AI is moving so fast, said Andrew Sliwinski, LEGO Education’s head of product experience, that it’s hard for makers of education products that aim to teach students about AI to keep up.
Sliwinski said his team looked to external partners for help, including his former employer, the Massachusetts Institute of Technology, along with Tufts University and the Computer Science Teachers Association. CSTA is in the process of revising its K-12 computer science standards, which will be released this summer.
About This Analyst
Andrew Sliwinski is the vice president and head of product experience for LEGO Education, where he is responsible for the strategy, design, development, testing, launch, and day-to-day management of LEGO Education’s products. He previously was a research scientist at the Massachusetts Institute of Technology and director of learning products at the Mozilla Foundation, and he co-founded the learning community DIY.org. He has worked with a wide range of educational organizations during his career, including The Aspen Institute, Google.org, the Bill & Melinda Gates Foundation, and the John S. and James L. Knight Foundation.
LEGO Education last month announced that its new computer science and artificial intelligence curriculum and hands-on kits will debut in April for K-12 classrooms. LEGO Education Computer Science and AI is designed for students to collaborate in groups of four as teachers lead hands-on lessons using LEGO bricks and other hardware. Kits are designed for grades K-2, 3-5, and 6-8.
There was real confusion and tension around what we think is important for children to learn at these age groups.
Andrew Sliwinski
The release follows last year’s launch of LEGO Education Science. Both reflect the company’s move beyond its typical offerings of supplemental science, technology, engineering, and mathematics materials. The introduction of the computer science and AI products coincides with the retirement of LEGO’s portfolio of Spike products.
EdWeek Market Brief spoke to Sliwinski about the work of defining AI literacy, what education vendors get wrong when developing these products, and the questions they should be asking to better support their end users.
The following has been edited for length and clarity.
Tell me about the thinking behind LEGO’s new AI literacy framework.
There’s a lot of AI FOMO [or fear of missing out] going on in the industry. That creates this instinct where companies often feel like they need to invent something because what they’re looking for isn’t exactly perfect, or it isn’t out there just yet. You see that a lot with this proliferation of 100 different frameworks and 100 different standards and 100 different [industry] bodies, all overlapping in the same space right now.
Have you heard the term “cow path”? If you go to a college campus or a busy area where you have lots of intersecting sidewalks, you’ll often see a “cow path” cut into the grass where all of the people are actually walking. In business, and particularly in education right now, there’s always this feeling [that] we have to pick a direction and then go that way. People forget to look at where people are already walking, and to focus on that, learn from that, listen to that, and try to build around what the audience is trying to tell you.
We looked at what’s going on, all these different frameworks, and tried to say, “What’s getting the most traction with teachers?” We found a couple places where we felt like there was that strong connection to the practitioner side of things and based it around that.
Why was it important to do this collaboratively with organizations like CSTA and universities?
There’s lots of different types of academic partners. You have the folks that are trying to figure out some of the really deep, important, critical ideas around privacy and safety, and what are children learning, and what are the developmental or the social or emotional side effects of [generative] AI systems.
Then there are the folks that are doing more applied research in the trenches, in the field, with teachers and with kids. A great example of that is Tufts University, which we brought in as a partner throughout the whole product development process because of how deeply connected they are to a community of teachers in Boston. We just found that so invaluable throughout the process.
We could [ask], “What about this idea?” And they [might say], “That’s a terrible idea because you’re not thinking about this reality of an elementary school teacher in Somerville, Mass.”
Or [they might say], “I’m going to go test it,” and then they come back and create those tight feedback loops that are so important to build the right thing.
What were the hardest points of disagreement when your team was building consensus around these standards?
There was real confusion and tension around what we think is important for children to learn at these age groups. And we focus on K-8. [There were] tensions like, “Should probability sit here, or should it sit there?” When was it developmentally appropriate to hit different standards?
I was really bullish on bringing some of the essence of probability in at earlier ages, but that doesn’t connect with Common Core math standards, where you hit probability and statistics when you’re in high school. So it was finding some of those alignments, [and] once we got a clear idea of what we want children to learn, finding where in the scope and sequence it makes sense for them to learn it. There was a lot of back and forth on that.
There was a kind of trifecta that we used early on in the process. Do we want to teach children how to use AI? Do we want children to learn how to train AI? Or do we want to teach children how to build AI? That paradigm was really effective for us early on, and it caused a lot of disagreements and a lot of debates. It’s become a little bit irrelevant in 2025 and 2026 because [generative] AI has moved so fast.
Do you see these standards as prescriptive or more of a shared language that the field can build on?
I think it builds very much on the strong CSTA standards that were there for computer science prior. It was very thoughtful [in establishing that] AI is a branch of computer science. That is a fact, and one I agree with.
Incrementally, on top of computer science, [the question is] what do children need to learn about AI? So I think it was done very thoughtfully in a field where a lot of people ran around like chickens with their heads cut off trying to chase AI policies. I think the CSTA did a really nice job of taking a deep breath and then situating AI in the context of computer science, which I think was the right call.
Our entire product development process starts with the learning outcome. [You must] articulate the learning outcome, and we often use standards to do that. When we develop a science product, we look at … whatever the relevant standards are, and we start with that.
It was fairly straightforward because computer science standards, [while] they certainly change faster than science standards, are more stable [than AI]. In the AI space, we felt like there was this lack of clarity about what you actually want children to learn.
How will the new framework on AI literacy serve not just students, but also educators?
If you don’t bring the teachers along, then none of this works. From so much of our testing, we know that often the least confident person in the room when it comes to a 45-minute AI or computer science lesson is actually the teacher.
This is getting harder, rather than easier, right now. As states and districts transform computer science, particularly in elementary, into a [mandatory] core subject, that work often falls on the shoulders of generalist classroom educators who don’t have a background in computer science, much less AI.
A big part of what we had to focus on was [that] we also need to bring these ideas to the teacher and then support them to bring it to the students. That translates into the way that we think about the standards [and] into the types of industry bodies that we were paying the most attention to. We ended up really focusing on the Computer Science Teachers Association because in their standards development process, they were engaging with educators.
Where are education vendors most likely to get AI literacy wrong right now?
There [are] so many companies that are focused on teaching children how to use AI tools, but I would love to see companies in the industry, more broadly, spend just as much effort on helping children understand how these things work and empowering them to build with them.
Give them the tools to build with these technologies because [large language models are] moving so fast that there’s this sense that AI is this unstoppable tidal wave coming to wash away human relevance. But what that misses is that children are incredibly capable people, and they are actually the ones that are going to have to invent the next technologies and build things with this.
There’s so much opportunity in focusing on helping children understand how these technologies actually work. Because it’s not magic. It’s just probability and statistics and math.
The other thing, and I say this to my team constantly, is never let the tail wag the dog. It’s particularly pervasive with AI right now because of the [fear of missing out]. What I mean by that is never let the tech lead the learning. Never let the tech lead the child. Never let the tech lead the teacher. The tech is there to support the learning. And boy, have we lost track of that in our industry right now with AI in particular.
There’s a lot of focus with these tools — either way over toward the child and not thinking about the teacher, or way over towards the teacher and not thinking enough about the child. And I think that predates AI, but it’s pulled us away from who our actual customers are.
What’s missing from existing frameworks or conversations around AI literacy within the K-12 space?
A year or two ago, it was like every day somebody was establishing some new think tank or some new group around AI standards or AI principles or frameworks. It was just kind of framework madness for a little bit. But I saw things falling into three buckets.
On the one hand, we had this big emphasis on vocational applications of AI. It was this focus on, “Let’s teach children how to use generative AI systems because this is important for [the] workforce.” And that’s really important — teaching kids how to use generative AI systems, how to navigate them safely, how to use them effectively.
The other side of what we saw was a lot of frameworks being developed around the defensive side. So, “Let’s protect children from the harms of generative AI.”
The third part was a little bit more difficult to navigate, and that’s where we wanted to focus, which was, “Let’s help children understand how these systems work and empower them to use them and to build them.”
What should product teams be asking themselves if they want to claim their tools support AI literacy, but don’t want to overpromise?
We should hold ourselves to a higher bar. If you start by understanding what you want children to learn, then develop products, and during the product development process hold yourself accountable to showing evidence that children are learning those things, then you should be able to answer that question.
As an industry, we need to do better. If you’re saying that your product develops AI literacy, then you should have a really clear definition of what AI literacy means, and then you should develop your product from the very beginning to deliver that.
What signals should vendors watch for that AI literacy is moving from a nice-to-have to a must-have?
The tech is moving faster than research and regulation, and the tech is also moving faster than policy, so it’s a little tricky. You can monitor state-level and local-level regulations and policies as they emerge, but in some ways that’s almost a trailing indicator. What we found really helpful is talking with schools, particularly superintendents and school administrators in some of these early adopter districts that are approaching this very thoughtfully, and just asking them: Who are you watching? Who did you get inspiration from?
I was in Georgia about a month ago, and I was with these incredible educators. They showed me the AI policy they had built, and it was incredibly thoughtful. It was probably the best I’d ever seen in the U.S. I just asked them, where did you get all this inspiration from? They pointed me to about 10 different schools around the U.S. that they thought were doing things in a really thoughtful way. So I think some of it is just trying to follow that and be curious about that.
How often do you expect AI literacy standards to need updating, given how fast the technology is changing?
I have two perspectives on that. One is that math changes once every few hundred years. Science changes once every decade; we decide that Pluto is not a planet, or that dinosaurs had feathers. But computer science changes every day. And so it is, just by its very nature, a different space.
I do think that it makes it difficult, but I want to make sure that our industry doesn’t use it as an excuse to sort of abdicate responsibility for focusing on learning outcomes when you’re developing the product. Just because the standards are moving doesn’t mean that you shouldn’t design your products to meet them.
The other perspective concerns the fundamental ideas that underlie ChatGPT and [large language models] and the transformer model, and all of this stuff. A lot of those foundational ideas have been the same for decades — like statistics, probability, data collection, machine sensing, and algorithmic bias. Not a single one of those things was created by ChatGPT. All of that goes back to the 2000s, the 1990s, the 1980s, and even the 1970s.
One of the benefits of focusing down at that lower level, on the foundational ideas that underlie all of the AI systems, is that, actually, those ideas don’t change that fast. I’m not saying that they won’t change, but history has shown us for the last few decades that those fundamental ideas are pretty stable.
Are there AI capabilities you think don’t belong in the K-12 classroom yet, no matter how sophisticated they are?
There isn’t one right answer for this. A lot of it is what’s right for that community, that school, that teacher, that parent, or that child. So I think we’re seeing that play out at a super-fast pace right now.
When we were designing the products and thinking through what we wanted to do in this space, we came up with a bunch of what we call red lines. These are lines that we wouldn’t cross, because we felt like if we did, then we wouldn’t be able to live up to our values. One of them is that we decided not to embed the [AI] generation of text or media into our products.
We never anthropomorphize AI. So we never give AI a face or a name or describe it as creative. Creativity is for humans.
Andrew Sliwinski
There are many, many examples of schools doing that well. Folks in that space are really trying to do a great job and think very thoughtfully about how that can be done. But at the same time, generative AI systems have some inherent risks and some inherent challenges, like jailbreaking. Generative AI systems can generally be made safer, but most of them can’t be made safe. So we decided to take the difficult step of not bringing those into our product.
Are there any other features you try to stay away from in designing AI products for children?
Another good example is that we never anthropomorphize AI. We never give AI a face or a name or describe it as creative. Creativity is for humans. What we find is [that] even some of the early research shows us that if children think AI has a human-like intelligence, it can have a variety of side effects that can be detrimental. So for us, it’s really important that we never cross that line, and that we actually support children in understanding that AI has machine-like intelligence, not human-like intelligence, and that it was trained and created.
That might sound like a subtlety, but I think when we start to look at things like children feeling more comfortable sharing something difficult with a [large language model] than they do with a trusted teacher or a parent, some of those behavioral side effects are in some ways connected to that idea that the AI has a human-like intelligence.
