The intersection of artificial intelligence (AI) and education is one of the most exciting yet challenging frontiers in technology today. As AI becomes woven into everyday life, it is natural to ask how deeply it can reshape the learning experience. But how can an AI built as a general-purpose assistant be adapted to the particular, nuanced demands of education? That question drove Google’s work on its pioneering platform, LearnLM.

In this episode of the Ed-Technical podcast, Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel continue their conversation about AI’s role in education with a group of Google experts, including members of DeepMind, who explain how and why Google developed LearnLM, a family of large language models (LLMs) built for the educational domain. This is the second episode in a three-part mini-series, offering listeners a look behind the curtain at the technical and conceptual work required to take a model beyond a general AI assistant and turn it into a genuine educational tool.
The Problem with Traditional Large Language Models
At its core, LearnLM addresses a significant gap in the AI world: while general-purpose large language models (LLMs) such as GPT-3 are impressive in their versatility and conversational ability, they are not designed to be teachers. These models are generalized tools built to generate text, answer questions, handle open-ended prompts, and much more; what they lack is a nuanced understanding of pedagogy.
One of the central challenges for Google in developing LearnLM was creating an AI that could not only talk to learners but also model effective teaching practices. Teaching is not simply an information dump; it is about identifying each learner’s needs, presenting material in relevant and engaging ways, and adapting to different ways of learning. Conventional LLMs are ill-suited for this.
As a result, Google took a different approach, creating an LLM that could serve as a foundation for education-focused products capable of delivering lessons, tutoring students, and providing personalized learning experiences.
Transforming a Generic AI into an Educational Tool
The key to LearnLM’s development was the fine-tuning process that allowed Google’s engineers and researchers to mold a general-purpose model into one that could perform educational tasks. Fine-tuning is the process of refining a large, pre-trained model to excel in a specific domain by training it further on domain-specific data. In the case of LearnLM, this meant equipping the model with pedagogical behaviors—the very traits that make teaching effective.
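To make the general idea concrete, here is a minimal sketch of what supervised fine-tuning on pedagogy-focused dialogue can look like, using the open-source Hugging Face transformers and datasets libraries. The base model name, the tutoring_dialogues.jsonl file, and the hyperparameters are illustrative assumptions only; Google has not published LearnLM’s actual training setup.

```python
# A minimal sketch of supervised fine-tuning on pedagogy-focused dialogue data.
# The base model, data file, and hyperparameters are illustrative assumptions,
# not details of how LearnLM itself was built.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "google/gemma-2b"  # assumed open base model, for illustration only

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record pairs a learner prompt with a tutor-style response that models
# pedagogical behavior (probing questions, hints, encouragement).
dataset = load_dataset("json", data_files="tutoring_dialogues.jsonl", split="train")

def preprocess(example):
    text = f"Learner: {example['prompt']}\nTutor: {example['response']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="learnlm-style-sft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is the shape of the workflow rather than the specifics: the base model stays general-purpose, and the pedagogical behavior comes from the domain-specific examples it is further trained on.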
Libby and Owen speak with Muktha Ananda, Engineering Leader for Learning and Education at Google, who explains that this transformation was no simple task. Muktha’s team, which works across multiple Google products like Classroom, Search, and YouTube, focused on creating a horizontal AI technology that could be applied across various educational surfaces. This cross-platform approach meant that LearnLM had to be versatile enough to work in different contexts, whether in an online course, a self-paced learning environment, or a tutoring application.
One of the challenges of this approach was ensuring that the AI could adapt to the specific needs of learners. Educational AI cannot simply provide the correct answer—it must engage students in a way that makes learning stick. Teachers know that feedback, empathy, and adaptation are essential to effective teaching. With LearnLM, Google aimed to replicate these features, creating a tool that can not only answer questions but also encourage critical thinking, provide constructive feedback, and create an engaging learning environment.
A Multidisciplinary Effort
To bring LearnLM to life, Google brought together experts from various fields, including AI, psychology, and neuroscience. This multidisciplinary approach helped ensure that the model was not only technically sound but also aligned with how humans learn.
Markus Kunesch, Staff Research Engineer at Google DeepMind, is the tech lead for its AI for Education research program. In the podcast, Markus discusses the ethical considerations that came with designing an AI for education: training the model to handle sensitive educational contexts without reinforcing bias or spreading misinformation was an important part of the project. As AI becomes more widespread in classrooms, there is growing concern about the ethics, transparency, and fairness of such tools.
Another key member of the team was Irina Jurenka, Research Lead at Google DeepMind, who worked on furthering generative AI capabilities. She emphasized the importance of cognitive science in the development of LearnLM. Her research background in neuroscience, psychology, and machine learning helped her identify how AI could be more attuned to human cognition and learning processes. Irina and her team worked on improving reasoning in language models, ensuring that LearnLM was not only responsive but also capable of critical thinking, which is essential in education.
The Rigorous Development Process
Creating LearnLM was no easy feat. It required rigorous, iterative testing and constant refinement. Unlike traditional AI applications that are often deployed after a single round of training, LearnLM’s development was a continuous process of gathering feedback from users, educators, and learners, making adjustments, and testing new iterations. This allowed the team to fine-tune the AI’s responses, ensuring it was aligned with educational goals.
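One plausible shape for a single turn of that loop is sketched below: educator ratings of the model’s responses are collected against a simple rubric, and the lowest-scoring interactions are flagged as material for the next round of tuning. The rubric dimensions, threshold, and sample data are assumptions for illustration, not Google’s actual evaluation criteria.

```python
# A hedged sketch of one iteration of a feedback loop: score tutor responses
# against a simple rubric and flag weak ones for the next fine-tuning round.
# The rubric dimensions, threshold, and sample data are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class RatedInteraction:
    prompt: str
    response: str
    accuracy: int    # 1-5: is the content factually correct?
    pedagogy: int    # 1-5: does it guide the learner rather than just give the answer?
    engagement: int  # 1-5: does it invite the learner to keep thinking?

    def overall(self) -> float:
        return mean([self.accuracy, self.pedagogy, self.engagement])

def flag_for_next_iteration(ratings: list[RatedInteraction],
                            threshold: float = 3.5) -> list[RatedInteraction]:
    """Return the interactions educators rated poorly, to seed the next round of tuning."""
    return [r for r in ratings if r.overall() < threshold]

if __name__ == "__main__":
    ratings = [
        RatedInteraction("Why do seasons change?",
                         "Because the Earth's axis is tilted relative to its orbit.",
                         accuracy=5, pedagogy=2, engagement=2),
        RatedInteraction("What is 3/4 + 1/8?",
                         "Let's find a common denominator first. What could it be?",
                         accuracy=5, pedagogy=5, engagement=5),
    ]
    for r in flag_for_next_iteration(ratings):
        print(f"Needs work (overall {r.overall():.1f}): {r.prompt!r}")
```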
In the podcast episode, listeners are given a rare glimpse into the behind-the-scenes work that Google put into LearnLM, including how the team addressed challenges like distinguishing factual knowledge from contextual knowledge, a critical distinction in education.
Looking Ahead
While LearnLM is still in its early stages, it represents a significant leap forward in the application of AI to education. Google’s ambitious goal of making quality education more accessible is already taking shape through the work of LearnLM, but there is still much to be done. As we look to the future, AI-driven education tools like LearnLM could become essential components of personalized learning experiences in classrooms worldwide.
In the final episode of this mini-series, Libby and Owen will take a step back to explore how we can define tutoring and assess whether an AI tool like LearnLM can truly deliver on its promises. With Google’s continued innovation, we may be on the cusp of a revolution in how AI supports education.
Stay tuned for the next episode in this exciting series.