“What does it mean to have children see themselves as being builders of AI technologies and not just users?” says Shruti.
The program starts by using a pair of dice to demonstrate probabilistic thinking, a system of decision-making that accounts for uncertainty. Probabilistic thinking underlies today’s LLMs, which predict the most likely next word in a sentence. By teaching a concept like this, the program can help demystify how LLMs work and show kids that a model’s choices are not perfect but the result of a series of probabilities.
Students can set each side of the dice to whatever value they want, and then change how likely each side is to come up on a roll. Luca thinks it would be “really cool” to incorporate this feature into the design of a Pokémon-like game he is working on. But the feature can also demonstrate some crucial realities about AI.
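The idea behind the customizable dice is weighted random sampling: each side gets a label and a probability, and rolls land on the more heavily weighted sides more often. A minimal sketch in Python, assuming made-up labels (the side names below are illustrative, not from the program itself):

```python
import random
from collections import Counter

# Hypothetical dice setup: each side is a label a student chose,
# and each weight is how likely that side is to come up.
sides = ["fire", "water", "grass", "electric", "rock", "ghost"]
weights = [0.40, 0.25, 0.15, 0.10, 0.05, 0.05]  # probabilities sum to 1.0

# Roll the weighted die many times; heavily weighted sides
# dominate the outcomes, just as frequent words do in an LLM.
rolls = random.choices(sides, weights=weights, k=1000)
print(Counter(rolls).most_common())
```

This is the same mechanism an LLM uses at each step: it doesn’t pick one “correct” next word, it samples from a weighted distribution over candidates.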
Say a teacher wants to show students how bias arises in AI models. The kids could be asked to create a pair of dice and set each side to a hand of a different skin color. At first, they could set the probability of a white hand at 100%, reflecting a hypothetical data set that contains only images of white people. When the AI is asked to generate a visual, it produces only white hands.
Then the teacher can have the kids increase the percentage of other skin colors, simulating a more diverse data set. The AI model now produces hands of varying skin colors.
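The classroom exercise above can be sketched in a few lines, treating the “data set” as nothing more than a probability distribution over skin tones (the function and tone names here are illustrative, not part of the program):

```python
import random
from collections import Counter

# Hypothetical tone labels for the bias exercise.
skin_tones = ["light", "medium", "dark"]

def generate_hands(weights, n=100):
    """Sample n 'generated images' from the given tone distribution."""
    return Counter(random.choices(skin_tones, weights=weights, k=n))

# Biased "data set": only light-skinned hands ever appear.
print(generate_hands([1.0, 0.0, 0.0]))  # every output is "light"

# More diverse "data set": the model now produces a mix.
print(generate_hands([0.4, 0.3, 0.3]))
```

Changing the weights is the whole lesson: the model’s outputs can only be as varied as the distribution it was given.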
“It was interesting using Little Language Models, because it makes AI into something small [where the students] can grasp what’s going on,” says Helen Mastico, a middle school librarian in Quincy, Massachusetts, who taught a group of eighth graders to use the program.
“You start to see, ‘Oh, this is how bias creeps in,’” says Shruti. “It provides a rich context for educators to start talking about and for kids to imagine, basically, how these things scale to really big levels.”