GUEST COLUMN | by Tim Hudson
The world of ed-tech is moving rapidly. As new learning software is created, the word “adaptive” is increasingly used in claims describing how technologies personalize and individualize learning for each student. As the Curriculum Director for DreamBox Learning, I oversee development of math lessons built on an adaptive learning platform. Developers like DreamBox have built these “recommendation engines” with the worthy goals of (1) ensuring the success of every student, (2) enabling each student to learn at her own pace and level of achievable challenge, and (3) supporting teachers whose daily challenge is differentiating for an entire classroom of students at different points on their learning paths.
A primary way in which any adaptive platform supports teachers is by making recommendations for students in real time. Teachers are stretched thin and work tirelessly for their students. It’s asking too much of teachers to require that they analyze individual student data and make assignments for every student on a daily or weekly basis. Not only is this continuous data analysis cycle challenging and time-consuming, but it also requires deep curriculum expertise in order to preserve what the authors of the Common Core refer to as “consistent progressions” and “coherent connections” as students learn and develop.
Despite these noble goals and strategic support, technologies claiming to be adaptive have rightly come under scrutiny from educators who understand curriculum and pedagogy. In August, Audrey Watters described “any company touting ‘adaptive learning’ software” as being influenced heavily, if not entirely, by the behaviorist B.F. Skinner. The Skinner excerpt she references in her post accurately describes the approach used by many adaptive platform developers; I believe it also describes how many educators expect “adaptive” software to be designed. But Skinner’s ideas do not match the research on how humans develop cognitively, so Watters is right to be wary.
In October, Dan Meyer posted two criticisms of “adaptive” technologies. In the first, he drew comparisons to Stanley Erlwanger’s research on the failures of Individually Prescribed Instruction (IPI). Meyer appropriately lamented the poor feedback given to students, along with the fact that antiquated prescriptive approaches fail to develop mathematical intuition and appreciation for the beauty of the subject. In his second post, Meyer referenced quotes from two developers who built adaptive engines around analysis of behavioral data while disregarding the quality of students’ learning experiences. Meyer noted, “I don’t have a lot of hope for a system that sees learning largely as a function of time or time of day, rather than as a function of good instruction and rich tasks.”
Watters and Meyer are just two of many legitimate skeptics of adaptive learning software. Their concerns are valid and must be considered when examining whether new tools are truly helping students learn. Even though “adaptive learning” developers have noble goals, the design of each adaptive platform reveals important pedagogical approaches and assumptions made by the developers. The adaptive platform determines the pedagogy and the way students engage with learning. Not all adaptive platforms are capable of supporting strong pedagogy and rich learning tasks.
Broadly speaking, there are two ways to build an adaptive learning platform. The first is an approach similar to how Netflix makes entertainment recommendations. It begins with existing, static content such as textbook passages, online lectures, and quiz or standardized assessment items. Usually, this content is very narrow in scope in order to isolate specific skills and diagnose errors in using those skills. Next, this content is arranged into a sequence (or “learning map”) based on a best guess of how students should encounter the skills. When students begin lessons, their progress data, behaviors (such as click-rates or login times) and assessment scores are subjected to “learning analytics” to establish a learner profile and position on the lesson map. That profile is used to recommend lectures, choose lesson sequences and report usage insights such as suggested study and practice times. Additional analytics are then applied so that crowd-sourced user behaviors inform and adjust the learning map and sequencing for future students.
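To make the critique concrete, the “behavioral profile” loop described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration — the map, the mastery threshold, and the function names are my own inventions, not any vendor’s actual design:

```python
# Hypothetical sketch of a "behavioral profile" adaptive engine: a fixed,
# linear learning map plus a score threshold decides which piece of static
# content to serve next. All names and numbers here are illustrative.

LEARNING_MAP = ["count_by_ones", "count_by_twos", "count_by_fives", "count_by_tens"]
MASTERY_THRESHOLD = 0.8  # assumed cutoff on quiz accuracy

def recommend_next(position: int, quiz_score: float) -> tuple[int, str]:
    """Return the learner's new map position and the content to deliver."""
    if quiz_score >= MASTERY_THRESHOLD and position + 1 < len(LEARNING_MAP):
        position += 1  # advance one step along the fixed sequence
    # Otherwise: re-deliver the same static lesson -- "slower and louder."
    return position, LEARNING_MAP[position]

# Simulated quiz results for one student moving through the map
position = 0
for score in [0.9, 0.5, 0.85]:
    position, lesson = recommend_next(position, score)
    print(lesson)
```

Note what the engine actually consults: a position on a map and a score. Nothing in the loop represents the student’s reasoning — which is precisely the limitation discussed next.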
While this “behavioral profile” platform design is effective for making entertainment recommendations, it has several weaknesses and limitations when applied directly to learning. First, it replicates many of the mistakes of IPI, most notably the flawed assumption that “learning comes about by the accretion of little bits” (Shepard, 1989). Second, such a platform is completely dependent on a pedagogical model in which the teacher (or system) “delivers” content and students become “receivers” of information. The lessons and instruction are static, so students never engage in authentic, independent thinking. Such a platform may collect mountains of data, but they are not data about students’ understanding and cognitive development; they are data about behaviors and the ability to replicate procedures on shallow assessment items. Third, the “adaptivity” for students not making progress essentially amounts to recommending that they passively receive the same or similar static content again. It seems that for online lectures, the strategy of “pause and rewind” has become the 21st-century equivalent of a teacher speaking “slower and louder.”
DreamBox took a different approach and built an intelligent adaptive learning engine. The intelligent difference lies in the pedagogy. Wiggins and McTighe say it best in Schooling by Design: “An understanding is a learner realization about the power of an idea… Understandings cannot be given; they have to be engineered so that learners see for themselves the power of an idea for making sense of things” (p. 113). At DreamBox, our starting assumption is that students are brilliant. Their critical thinking skills are underestimated if we think understanding can be given through content delivery.
Therefore, when creating lessons on the DreamBox platform, we design digital tools that empower students to have their own realizations. Our lessons adapt in real time because they aren’t static content; they are built to be interactive and to adapt at any moment to any child. Our non-linear sequencing is informed by decades of research about children’s natural development and growth in mathematical reasoning. Our lesson progressions are not crowd-sourced from other students’ preferences and behaviors. Our lessons are written by experienced teachers who decipher students’ thought processes based on their interactions with our digital tools. DreamBox teachers write lessons that respond to different strategies in specific ways, ideally just as a teacher would in person.
Students using DreamBox choose and create their own strategies, and they get feedback specific to their ideas. A simple example is the act of counting by groups. Young children naturally count by ones, but counting by groups is not an immediately intuitive idea. The pedagogical approach of other adaptive platforms would involve a teacher explaining, “You can also count by 2’s, 5’s, or 10’s. Here’s how.” In this content delivery model, the student is receiving someone else’s idea and strategy. As one of my math professors would say, “she’s getting answers to questions she’s never asked.” The platform and pedagogy of DreamBox empower students to develop ideas and skills in a better way.
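The difference between scoring an answer and responding to a strategy can also be sketched in code. The following is an illustrative toy, not DreamBox’s actual engine: the strategy categories and feedback strings are hypothetical, chosen only to show feedback aimed at how a student counted rather than at what total she reached:

```python
# Illustrative only: respond to the student's own counting strategy rather
# than grading the final answer. The categories and feedback strings are
# hypothetical, not taken from any real lesson engine.

def respond_to_strategy(group_sizes: list[int]) -> str:
    """Given the group sizes a student used to count a set of objects,
    return feedback directed at the strategy itself."""
    if all(size == 1 for size in group_sizes):
        return "You counted every object by ones. Could grouping make it faster?"
    if len(set(group_sizes)) == 1:
        n = group_sizes[0]
        return f"You counted in equal groups of {n}. How many groups did you need?"
    return "You used groups of different sizes. What happens with equal groups?"

print(respond_to_strategy([1] * 20))      # counting by ones
print(respond_to_strategy([5, 5, 5, 5]))  # counting by fives
```

Every branch here can arrive at the same correct total; what differs is the thinking, and that is what the feedback addresses.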
Adaptive learning software is a powerful partner with teachers and schools to ensure student success. We know too much about human learning to embrace adaptive platforms that ignore pedagogy. Sound pedagogy drives DreamBox’s intelligent, adaptive platform.
Tim Hudson, Ph.D., is Director of Curriculum Design at DreamBox Learning. Prior to joining the company in 2011, he served as the Curriculum Coordinator of K-12 Mathematics for the Parkway School District in St. Louis, MO, facilitating the development of the K-12 math curriculum; the development of district-wide math benchmark assessments; the creation of a math intervention program; and the selection of math-based technology tools and textbook materials. He is a longtime member of the National Council of Teachers of Mathematics (NCTM). Write to: firstname.lastname@example.org
Wiggins, G., & McTighe, J. (2007). Schooling by design: Mission, action, and achievement. Alexandria, VA: Association for Supervision and Curriculum Development.
Shepard, L. A. (1989, April). Why we need better assessments. Educational Leadership, 46(7), 4–9.