AI and Creative Learning:
Concerns, Opportunities, and Choices
--
As each new wave of technology ripples through society, we need to decide whether and how to integrate it into our learning environments. That was true with personal computers, then with the internet, and now with artificial intelligence (AI) technologies. For each new technology, there are many different ways to integrate it into how we teach and learn. These choices are critically important: different choices can have very different outcomes and implications.
How should we make these choices? I think it’s important to carefully consider what type of learning and education we want for our children, our schools, and our society — and then design uses of new technologies that align with our educational values and visions.
What does that mean for the integration of new AI technologies such as ChatGPT into our learning environments?
In my view, the top educational priority in today’s world is for young people to develop as creative, caring, collaborative human beings. With the pace of change accelerating in all parts of the world, today’s children will face a stream of uncertain, unknown, and unpredictable challenges throughout their lives — and the proliferation of new AI technologies will further accelerate the changes and uncertainties. As a result, it’s more important than ever for children from diverse backgrounds to have opportunities to develop the most human of their abilities, the abilities to think creatively, engage empathetically, and work collaboratively, so that they can deal creatively, thoughtfully, and collectively with the challenges of a complex, fast-changing world.
Unfortunately, I find that many of the current uses of AI in education are not aligned with these values — and, in fact, they could further entrench existing educational approaches at a time when significant changes are needed. Too often, today’s AI technologies are used in ways that constrain learner agency, focus on “close-ended” problems, and undervalue human connection and community.
But I also see intriguing opportunities for new generative AI tools, which can generate texts and images in response to natural-language questions and comments. I believe these new AI systems could be used in ways that support project-based, interest-driven creative learning experiences — expanding the ways that learners can imagine new ideas, build on their interests, design new projects, access diverse resources, and get feedback on their ideas. But this will happen only if people make explicit, intentional choices to use the new tools in this way.
In this essay, I’ll start by discussing my concerns about current uses of AI tools in education, and then I’ll explore how we might leverage new AI technologies to support creative learning experiences.
Concerns
In most critiques of AI systems, the focus is on problems that were not intended by the developers of the systems (e.g., biases or inaccuracies based on the sets of examples used to train the systems, and inadequate acknowledgement or compensation for the artists and writers whose work is used in the training) and on problems that arise when the systems are used differently than the developers had hoped (e.g., students turning in papers produced by AI systems as if the work were their own). These are important problems and need to be addressed. But in this essay, I have a different focus. I will be discussing why I’m concerned about many AI-in-education systems even when they work exactly how their developers intended and are used exactly how their developers had hoped.
Concern #1: Constraining learner agency
Back in the 1960s, as researchers were beginning to explore how computers might be used in education, there were two primary schools of thought. One focused on using computers to efficiently and effectively deliver instruction to the learner. The other focused on providing learners with opportunities to use technologies to create, experiment, and collaborate on personally meaningful projects. Seymour Papert referred to these two different approaches as instructionist and constructionist.
Over the years, most AI researchers and developers have focused on the first approach, developing “intelligent tutoring systems” or “AI coaches” that provide instruction to students on particular topics, continually adapting the trajectory of the instruction based on student responses to questions. These systems have been promoted as a personalized approach to teaching, aiming to provide each student with customized feedback and instruction based on their current level of understanding, as opposed to a one-size-fits-all approach in which the same instruction is delivered to all students.
With advances in AI technology, these tutoring systems have become more effective in delivering instruction that adapts to individual learners. For example, some AI tutors and AI coaches have demonstrated improved results when they deliver instruction through a virtual character that looks like the student’s favorite teacher or favorite celebrity.
I don’t doubt these research results, but I worry that some of these “improvements” are perpetuating and reinforcing an educational approach that is in need of a major overhaul. To a large degree, AI tutors and coaches have been designed to be in control of the educational process: setting goals, delivering information, posing questions, assessing performance. That’s also the way most classrooms have operated over the past couple centuries. But the realities of today’s world require a different approach: providing students with opportunities to set their own goals, build on their own interests, express their own ideas, develop their own strategies, and feel a sense of control and ownership over their own learning. This type of learner agency is important in students’ development, helping them develop the initiative, motivation, self-confidence, and creativity that will be needed to contribute meaningfully as citizens in their communities.
AI tutors and coaches are promoted as “personal” since they deliver personalized instruction. But in my view, a truly personal approach to learning would give the learner more choice and control over the learning process. I’d like learners to have more control over how they’re learning, what they’re learning, when they’re learning, where they’re learning. When learners have more choice and control, they can build on their interests, so that learning becomes more motivating, more memorable, and more meaningful — and learners make stronger connections with the ideas that they are engaging with.
Some AI tutors and coaches try to support greater learner agency. Instead of controlling the instructional flow, they are designed to provide tips, advice, and support as needed, as students work on problems. But these systems still raise some concerns. Although the systems are less controlling than earlier AI tutors, some people still experience them as intrusive. As one educator wrote: “I would absolutely crumble if I was a middle schooler and had to chat with an AI bot that was trying to pull the answer out of me.” In addition, these AI tutors and coaches still tend to focus on specific types of knowledge and problems. Which leads to my next concern…
Concern #2: Focusing on “close-ended” problems
Over the past decade, there has been a proliferation of websites designed to teach young people how to code. The vast majority of these sites are organized around a series of puzzles, asking students to create a program to move a virtual character past some obstacles to reach a goal. In the process of solving these puzzles, students learn basic coding skills and computer science concepts.
When our Lifelong Kindergarten group at the MIT Media Lab developed Scratch, we took a different approach. With Scratch, young people can create animations, games, and other interactive projects based on their interests — and share them with others in an online community. Through this project-based, interest-driven approach, students still learn important coding skills and computer science concepts, but they learn them in a more motivating and meaningful context, so they make deeper connections with the ideas. At the same time, the project-based, interest-driven approach helps young people develop their design, creativity, communications, and collaboration skills, which are more important than ever in today’s world.
So why do so many coding sites focus on puzzles rather than projects? One reason is that it’s easier to develop AI tutors and coaches to give advice to students as they work on puzzles. With puzzles, there is a clear goal. So as a student works on a puzzle, the AI tutor can analyze how far the student is from the goal, and give suggestions on how to reach the goal. With projects, the student’s goal might not be clear, and might change over time, so it’s more difficult to develop an AI tutor to give advice.
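To make the contrast concrete, here’s a minimal sketch (in Python, with a hypothetical maze-style puzzle; all of the names are my own invention, not drawn from any actual tutoring system) of why close-ended problems are so tractable for automated advice: when the goal is explicit, a few lines of search are enough to measure how far a student is from the goal and suggest a next move.

```python
from collections import deque

# A hypothetical maze-style coding puzzle: move a character across a grid,
# past obstacles, to a goal square. Because the goal is explicit, an
# automated "tutor" can measure progress and suggest a next step.

WIDTH, HEIGHT = 8, 8
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def legal(cell, obstacles):
    return 0 <= cell[0] < WIDTH and 0 <= cell[1] < HEIGHT and cell not in obstacles

def distance_to_goal(start, goal, obstacles):
    """Breadth-first search: number of moves from start to goal (None if unreachable)."""
    frontier, visited = deque([(start, 0)]), {start}
    while frontier:
        (x, y), dist = frontier.popleft()
        if (x, y) == goal:
            return dist
        for dx, dy in MOVES.values():
            nxt = (x + dx, y + dy)
            if legal(nxt, obstacles) and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def hint(pos, goal, obstacles):
    """Suggest whichever legal move brings the student closer to the goal."""
    current = distance_to_goal(pos, goal, obstacles)
    if current is None:
        return "There's no path to the goal from here."
    for name, (dx, dy) in MOVES.items():
        nxt = (pos[0] + dx, pos[1] + dy)
        if legal(nxt, obstacles) and distance_to_goal(nxt, goal, obstacles) == current - 1:
            return f"You're {current} moves from the goal. Try moving {name}."
    return "You're already at the goal!"

print(hint(pos=(0, 0), goal=(5, 3), obstacles={(1, 0), (1, 1)}))
```

With an open-ended Scratch project, by contrast, there is no goal state to pass into a search like this, which is exactly why giving automated advice is so much harder.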
Over the years, most AI tutors and coaches have been designed to provide instruction on problems that are highly structured and well defined. With new AI technologies, there are possibilities for developing systems that could provide feedback and advice on more open-ended projects. But I’ve been disappointed by the way that most AI researchers and EdTech companies are putting these new technologies to use. For example, I recently saw a presentation in which a prominent AI researcher showed off a new ChatGPT-based system that was asking students a list of single-answer questions. The conversational interface was new, but the educational approach was old. And when Khan Academy recently introduced a new AI tutor called Khanmigo, the first example it showed on its website was a multiplication problem involving a fraction (the tutor asked: “What do you think you need to do to multiply 2 by 5/12?”). This type of problem has a single answer and well-defined strategies for getting to the answer — exactly the type of problem that AI tutors have traditionally focused on.
This points to an important educational choice: Should schools focus more on open-ended projects or “close-ended” problems? My preference is to put more emphasis on projects, where students have more opportunities to learn to think creatively, express their ideas, and collaborate with others, while still learning important concepts and basic skills in a more meaningful and motivating context.
Schools have generally preferred close-ended problems since they are easier to manage and assess. Schools end up valuing what they can most easily assess, rather than figuring out ways to assess the things that are most valuable. I worry that EdTech companies and schools will focus on AI tutors that fit in this same framework and further entrench this educational approach, crowding out much-needed changes in how and what students learn. Instead, as I discuss in the Opportunities section below, I hope that there are more efforts to use new AI technologies to support learners as they engage in project-based, interest-driven learning experiences.
Concern #3: Undervaluing human connection
In some situations, AI tutors and coaches can provide useful advice and information. And with advances in AI technology, these systems are becoming better at deciding what information to deliver when, and customizing the information based on what students have already learned and what misconceptions they might have.
But good teaching involves more than that. A good teacher builds relationships with students, understands students’ motivations, empathizes with students’ concerns, relates to students’ lived experiences, and helps students connect with one another. Facilitating a student’s learning is a subtle process, much more complex than simply delivering information and instruction at the right time. A good teacher understands how to cultivate a caring community among students, so that students feel welcomed, understood, and supported. A good teacher understands how to create an environment in which students feel comfortable taking the risks that are an essential part of a creative learning process.
Some AI tutors and coaches now try to take social-emotional factors into account — for example, using sensors and cameras to get a sense of a student’s emotional state. But these AI systems still aren’t able to understand or empathize with a learner’s experience or cultivate a caring community the way a human teacher can.
So it bothers me when AI tutors and coaches are promoted as if they are equivalent to human teachers. For example, a promotional video for a new AI-based tutor from Microsoft says that it’s “like there are 20 extra teachers in one classroom.” And Khan Academy promotes its Khanmigo system as “a world-class tutor for anyone, anywhere.” Some people might shrug off these descriptions as marketing hype. But I worry that they contribute to a devaluation of the human dimensions of teaching. Human teachers are fundamentally different from AI tutors, and I think it’s important to recognize the special qualities of human teachers — while also recognizing what AI systems do particularly well.
Some AI researchers and EdTech companies try to avoid the direct comparison to human teachers by positioning their systems as AI companions or collaborators or copilots rather than AI tutors. But they still try to emphasize the humanness of their systems. I find it especially troubling when AI systems describe their own behavior as if they were human. For example, I recently saw a presentation about a new AI system designed to interact with young children. In its interactions, the AI system talked in a human-like way about its intentions and feelings. This felt problematic to me, since it could mislead young children to believe that AI systems have motivations and feelings similar to their own.
I don’t want to idealize the behavior of human teachers and human collaborators. Many teachers do not have experience or expertise in facilitating creative learning experiences — and many children do not have access to teachers who do. There is a role for AI-based systems to supplement human teachers (as discussed in the next section). But we should clearly recognize the limitations and constraints of these AI systems, and we should not be distracted from the important goal of helping more people become good teachers and facilitators.
More generally, we need to make sure that the current enthusiasm over AI systems does not lead to a reduction in interactions and collaborations with other people. The importance of the human connection and community became even more apparent during the pandemic, when different schools adopted different pedagogical approaches. Some schools implemented remote-learning routines focused on delivery of instruction based on the traditional curriculum; in those schools, many students felt increasingly isolated and disillusioned. Other schools focused more on social-emotional and community aspects of learning, emphasizing the importance of supporting and collaborating with one another; in those schools, students felt a stronger sense of connection, empathy, and engagement. As Harvard Graduate School of Education professor Jal Mehta wrote: “Classrooms that are thriving during the pandemic are the ones where teachers have built strong relationships and warm communities, whereas those that focus on compliance are really struggling.”
The pandemic highlighted the importance of empathy, connection, and community in teaching and learning. As the pandemic recedes, we should keep our focus on these very special human qualities.
Opportunities
EdTech companies and AI researchers are using new generative AI technologies like ChatGPT to produce a new generation of AI tutors, coaches, and companions that are subject to all of the problems described above — and some new problems too.
At the same time, I believe that generative AI technologies could also be used to support project-based, interest-driven creative learning experiences — but only if we make the explicit, intentional choice to use them in that way. An important goal, in my view, is to provide students with more control and choice in their use of new AI tools, so that they can decide how and when to use the tools as a resource as they are designing, experimenting, and problem solving. While working on a design project, for example, a student might ask an AI system: “Can you explain how arches work, and show me some creative uses of arches in architecture?”
Many educators have expressed concerns that students will enter homework assignments into ChatGPT and submit the resulting text as their own work, a new form of plagiarism that is more difficult for educators to detect. Such use of ChatGPT should certainly be discouraged: it’s not only unethical, but it also robs students of the learning experiences that come from engaging in the process of turning an initial idea into a final product. Instead, we should encourage learners to use ChatGPT and other generative AI tools not to produce the final result, but as a resource throughout their own creative process.
Supporting the creative learning process
In our Lifelong Kindergarten research group at the MIT Media Lab, we think about the creative process in terms of a spiral in which learners imagine an idea, create something based on their idea, play and experiment with their creation, share it with others, reflect on their experiences and the feedback from others — all of which leads them to imagine new ideas, leading to the next iteration of the spiral.
There are opportunities for learners to use ChatGPT and other generative AI tools to spark ideas and provide feedback at various stages of this creative process. Here are a few examples:
- Just as writers and artists look at existing works to provide inspiration for new projects, some students are now using ChatGPT and other generative AI systems to create starting points and inspiration for their work. If they’re feeling stuck at the start of a project, they enter a few vague ideas, see how the system responds, and sometimes that helps them get started.
- I know a student who, while writing a paper, enters paragraphs into ChatGPT and asks for alternative phrasings. The student says that the ChatGPT suggestions often improve the flow of the paragraph and capture more accurately what the student was actually trying to convey. It’s a bit like using a thesaurus, but at a paragraph level. Using ChatGPT this way also improves the flow of the student’s own creative process, since the student can follow through on a line of thinking without being distracted by trying to fine-tune the wording of a paragraph.
- One student told me that he was unsure how to focus and frame a paper he was writing. So he described to ChatGPT all of the issues he was considering, and asked for advice. He said that the process of describing the issues was “an interesting reflective exercise for me,” and the responses from ChatGPT “helped me understand what type of paper I did or did not want to write, helping me clarify my own ideas — rather than influencing me with ‘its’ ideas.”
- Teachers and instructors are also using ChatGPT to support creative learning in their classes. I know someone who uses ChatGPT to refine discussion prompts for classes that she teaches, entering a draft prompt into ChatGPT to gain insights about the type of discussion it might generate. She treats her interactions with ChatGPT as a brainstorming session: posing different prompts to ChatGPT, seeing how it responds, and iteratively refining her prompts until she converges on a version that she feels could stimulate a rich conversation among the students.
To me, these uses of AI technologies feel very different from traditional AI tutoring systems. In each of the examples, the person is more in control of the process, engaging with ChatGPT only when they want to, and using ChatGPT’s responses as a catalyst for their own creative process, not as a replacement. It’s somewhat similar to the way that people, while working on a project, will use Google search or watch a YouTube video to get new ideas or information. Generative AI systems can serve as an additional resource, offering a different style of interaction and a more diverse range of results.
We shouldn’t expect (or want) AI systems to play the same role as human tutors or coaches or companions. Rather, we should consider AI systems as a new category of educational resource, with their own affordances and limitations. When people are looking for help or inspiration, they sometimes talk with a friend, sometimes refer to a book, sometimes do an online search. Each plays a different role. We can add AI systems to this mix.
Of course, there are still many issues to resolve. Generative AI systems must be improved to address the problem of biased and inaccurate results, and they must account for the fact that people with different learning styles and different backgrounds might require or desire different forms of interaction. Also, people will need to learn new skills for using generative AI systems effectively. They need to consider: What are the best strategies for writing prompts or questions for generative AI systems? And what are the best ways to ask follow-up questions to iteratively refine the responses from these systems? More broadly, people need to develop strategies for figuring out when AI systems can be helpful for them and when other resources are more useful.
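As a small illustration of the iterative-refinement skill mentioned above, here’s a sketch of a follow-up loop written against the openai Python library (the model name is a placeholder, and the same pattern applies to any conversational AI system). The key idea is that each follow-up question is appended to the running conversation, so the system refines its earlier answer rather than starting over:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

# A running conversation: keeping earlier turns in the message list is what
# lets a follow-up question refine the previous answer instead of starting over.
messages = [
    {"role": "user",
     "content": "Can you explain how arches work, and show me some creative "
                "uses of arches in architecture?"}
]

for _ in range(3):  # a few rounds of refinement
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model is available
        messages=messages,
    )
    answer = response.choices[0].message.content
    print(answer)

    follow_up = input("\nFollow-up question (or press Enter to stop): ")
    if not follow_up:
        break
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": follow_up})
```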
Integrating AI into other software systems
Increasingly, developers are integrating generative AI technologies into other software systems, including word processors, spreadsheets, and photo editors. As they do so, I hope that they will design their systems to support project-based, interest-driven learning experiences.
Given my past work, I am especially interested in how AI technologies might be integrated into programming environments for young people. For example, Eric Rosenbaum of the Scratch Foundation has been experimenting with ways to integrate AI-based image-generation tools within Scratch. If a child wants an anime-style purple frog in their project, they can type in “anime-style purple frog” and see what the system produces. This certainly should not replace the Scratch paint editor or image library, but it can provide another option for creating images within a Scratch project.
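I don’t know the internals of that prototype, but the underlying mechanism is straightforward to sketch. Here’s a minimal version in Python, using OpenAI’s image-generation API as a stand-in for whatever model the prototype actually uses (the function name is hypothetical):

```python
import requests
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

def generate_costume(prompt, filename):
    """Generate an image from a child's text prompt and save it to a file."""
    result = client.images.generate(
        model="dall-e-3",  # placeholder; any text-to-image model would do
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    image_url = result.data[0].url
    with open(filename, "wb") as f:
        f.write(requests.get(image_url).content)
    return filename

# The child's request from the example above:
generate_costume("anime-style purple frog", "purple_frog.png")
```

In an environment like Scratch, the saved image would then be imported as another costume for a sprite, alongside those made with the paint editor or chosen from the image library.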
Over time, there could be more fundamental changes to the ways people program computers. Instead of writing traditional computer code, or snapping together graphical coding blocks, what if people could generate programs by typing (or saying) things like “animate a bird flying across the sky with movements controlled by arrow keys”? Although we’ve invested a lot of effort in designing block-based programming environments like Scratch, we should be open to alternative paradigms for describing to computers what we want them to do.
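To give a feel for what such a system might produce, here is the kind of program a code-generating model could plausibly emit for the “bird flying across the sky” prompt above, sketched in Python with the pygame library purely as an illustration (a block-based environment would express the same behavior with motion and event blocks):

```python
import pygame

# One plausible rendering of the prompt: "animate a bird flying across
# the sky with movements controlled by arrow keys."
pygame.init()
screen = pygame.display.set_mode((640, 360))
clock = pygame.time.Clock()

bird = pygame.Rect(0, 150, 40, 24)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    bird.x += 2  # the bird drifts across the sky on its own
    keys = pygame.key.get_pressed()  # arrow keys steer it
    if keys[pygame.K_UP]:
        bird.y -= 4
    if keys[pygame.K_DOWN]:
        bird.y += 4
    if keys[pygame.K_LEFT]:
        bird.x -= 4
    if keys[pygame.K_RIGHT]:
        bird.x += 4
    if bird.x > 640:  # wrap around when it leaves the screen
        bird.x = -bird.width

    screen.fill((135, 206, 235))  # sky blue
    pygame.draw.ellipse(screen, (80, 60, 40), bird)  # stand-in for a bird sprite
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```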
But we should also be very careful in making these changes. Although it might seem very intuitive and natural to interact with a programming environment through high-level descriptions in a conversational interface, we should make sure that the system has other qualities that we value in introductory programming environments. Will it provide young people with the level of control they want for creating projects based on their own interests? Will young people be able to iteratively refine their projects as their ideas continue to evolve? Will young people still feel the joy that many experience when coding in Scratch? How difficult will it be for young people to remix one another’s projects, as they do in Scratch? Will the system be welcoming and engaging for learners across diverse socioeconomic and cultural backgrounds? Even if young people succeed at creating projects, what will they be learning in the process? In particular, will the activity of programming engage young people in reflecting on their own thinking, as happens with current programming environments?
Guiding principles
As we design and consider new uses for AI technologies, it would be useful to develop a set of guiding principles to check how well the new designs and new uses align with our educational values. For me, that means designing and using AI systems to engage young people from diverse backgrounds in creative, caring, collaborative learning experiences. So I started to put together a set of guiding principles for staying aligned with these values:
- support learners as they engage in design projects and navigate the creative learning spiral
- ensure that learners feel a sense of choice and control in the learning process, enabling them to develop their interests, their ideas, and their voices
- supplement and support (rather than replace) human interaction and collaboration
- provide opportunities for learners to iterate and refine their ideas and their creations
- take into account the different needs, interests, and aspirations of learners from diverse backgrounds, especially those from marginalized and vulnerable communities
As I was writing these principles, I realized they are roughly aligned with what I’ve called the four P’s of creative learning: Projects, Passion, Peers, and Play. That is, we need to support learners from diverse backgrounds as they work on projects, based on their passions, in collaboration with peers, in a playful spirit. I believe that these four P’s can continue to serve as useful guideposts as we consider how to integrate new AI technologies into our learning processes and educational systems.
Choices
There are many different ways that we can use new AI technologies to support learning and education.
Some uses of AI systems will constrain learner agency, focus on “close-ended” problems, or undervalue human connection and community. I worry that societal and market pressures will push the educational uses of AI systems in this direction.
But it is also possible to use AI systems to support a more project-based, interest-driven, human-centered, collaborative approach to learning, enabling learners to develop the motivation, creativity, and empathy that they will need to thrive in today’s complex, fast-changing world.
The choice is up to us. The choice is more educational and political than technological. What type of learning and education do we want for our children, our schools, and our society? All of us — as teachers, parents, school administrators, designers, developers, researchers, policy-makers — need to consider our values and visions for learning and education, and make choices that align with our values and visions. It’s up to us.
Acknowledgements
I would like to thank everyone who provided suggestions on earlier drafts of this blog post, helping me to refine my ideas through multiple iterations of the document. I am especially grateful to (alphabetically) Hal Abelson, Karen Brennan, Leo Burd, Kartik Chandra, Pattie Maes, Carmelo Presicce, Eric Rosenbaum, Natalie Rusk, and Brian Silverman — each of whom provided valuable feedback (though, of course, their contribution does not imply that they agree with the ideas presented in this post).