Big data and adaptive learning. Huge topics each in their own right, but combined they provide us with some exciting possibilities for the future of learning. We have alluded to all this in our article on AI in education, but there's a lot more digging to be done in such an important area.
First off, so that we are all on the same page, let's define adaptive learning (AL) as a tool or approach which "adapts to students’ proficiency levels with each interaction. Students don’t have to complete a formal assessment or diagnostic to get the instruction and practice they need — it’s provided just-in-time as students work to complete assignments." That's how Knewton puts it, and they are a company at the forefront. For those of us still getting familiar with it all, however, let's break down what AL actually does.
As a student works through the course, everything is measured by the AL tool or platform, and we mean everything: the time they take on one type of exercise versus another, the external tools they use to solve problems, the bits they skip and the bits they explore more deeply, the way they answer questions or approach tasks. All of this feeds back into a huge data pool that makes connections, detects patterns, and re-informs the algorithm, so that future learning episodes are tailored to what the student might need most and to how they prefer to learn.
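To make that feedback loop a little more concrete, here is a minimal, purely illustrative sketch in Python. The event fields, the `LearnerProfile` structure, and the recommendation rule are our own inventions for the sake of the example, not any particular vendor's data model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class LearnerProfile:
    """Hypothetical rolling profile built from raw interaction events."""
    time_per_exercise: dict = field(default_factory=dict)  # exercise type -> list of seconds
    skipped: list = field(default_factory=list)            # exercise types the learner skipped
    explored: list = field(default_factory=list)           # exercise types the learner dug into

    def record(self, event: dict) -> None:
        """Fold one raw interaction event into the profile."""
        kind = event["exercise_type"]
        self.time_per_exercise.setdefault(kind, []).append(event["seconds_on_task"])
        if event.get("skipped"):
            self.skipped.append(kind)
        if event.get("opened_extra_material"):
            self.explored.append(kind)

    def next_recommendation(self) -> str:
        """Very naive 'adaptation': revisit whatever the learner avoids or struggles with."""
        if self.skipped:
            return f"revisit:{self.skipped[-1]}"
        slowest = max(self.time_per_exercise, key=lambda k: mean(self.time_per_exercise[k]))
        return f"scaffold:{slowest}"

# A tiny invented feed of events:
profile = LearnerProfile()
for e in [
    {"exercise_type": "fractions", "seconds_on_task": 310, "skipped": False},
    {"exercise_type": "ratios", "seconds_on_task": 95, "skipped": True},
]:
    profile.record(e)
print(profile.next_recommendation())  # -> "revisit:ratios"
```

In a real platform the recommendation would come out of far richer statistical or machine-learned models, but the loop is the same: observe, update the profile, adapt the next step.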
That is where big data comes in. Building a model of student behavior like this, one that not only reacts to the learner one step at a time but also builds a picture and predicts future behaviors, takes a lot of data.
Sounds good? Quite possibly, but perhaps we have to ask ourselves a fundamental question: if we use big data to inform adaptive learning, what happens when our ideas about what learning looks like start to change? Is there a place for adaptive learning in a learner-directed model of education? We will come to that big question in part two of this article, but let's look first at how AL shows up in institutions.
You might already have guessed, but adaptive learning tools are not cheap. Consider, however, that for courses such as an online business degree, AL can considerably cut down on the need for real live humans to monitor learning progress, freeing them up for more strategic roles such as mentoring rather than the constant correction of tasks.
These systems can serve whole institutions or individual teachers, and come off-the-shelf or as hybrids. The hybrid models can be adjusted and adapted every semester to improve continuously. This matters: run a quick search of reviews for universities using adaptive learning and you'll see hugely differing results. Some are loved, and some are...less loved, shall we say. It can't be easy bringing AL to an institution, but there are good examples out there for us to learn from.
Brainquake produced an AL tool for mathematics, which reports great results in independent classroom studies. The tool is excellent at keeping students in their zone of proximal development: that is, "knowing" just when to push the complexity forward a bit, not so much that the lesson becomes confusing, but enough to be new and motivating for the learner.
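As a rough illustration of the kind of rule involved (and only that: the thresholds and step sizes below are our own assumptions, not Brainquake's actual algorithm), a difficulty adjustment in this spirit might look like:

```python
def next_difficulty(current: float, recent_correct: list[bool],
                    step_up: float = 0.10, step_down: float = 0.15) -> float:
    """Toy zone-of-proximal-development rule: nudge difficulty up when the
    learner is coasting, ease off when they are struggling. All numbers
    here are invented for illustration."""
    success_rate = sum(recent_correct) / len(recent_correct)
    if success_rate >= 0.8:        # comfortably succeeding: push a little further
        current += step_up
    elif success_rate < 0.5:       # struggling: pull back and consolidate
        current -= step_down
    return min(max(current, 0.0), 1.0)   # keep difficulty on a 0-1 scale

print(next_difficulty(0.6, [True, True, True, True, False]))  # -> 0.7
```

Real systems replace hard-coded thresholds with statistical models of the learner, but the intent, always a small step beyond current mastery, is the same.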
As a hybrid model, educators can customize tools like Brainquake. Whereas an off-the-shelf tool might simply mark an answer "right" or "wrong", AL can be customized to recognize the point in the calculation or formula where things went off track and suggest the next learning opportunity to address it.
This is just not something that can be done easily in a classroom or lecture theatre with 30-200 learners. Such a tool could be used by individual learners at home, with their data pooled collectively so the model keeps evolving, or implemented by individual teachers or whole institutions.
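To picture what that per-step diagnosis might involve, here is a deliberately simplified sketch. It checks a learner's intermediate results against the expected steps of a worked solution and reports the first point of divergence; the step names and numbers are invented for the example.

```python
def diagnose_first_error(expected_steps: list[tuple[str, float]],
                         learner_steps: list[float],
                         tolerance: float = 1e-6) -> str:
    """Compare a learner's intermediate results with a worked solution and
    report the first step where they diverge. Purely illustrative."""
    for i, (label, expected_value) in enumerate(expected_steps):
        if i >= len(learner_steps):
            return f"stopped before step '{label}'"
        if abs(learner_steps[i] - expected_value) > tolerance:
            return f"went off track at step '{label}'"
    return "all steps correct"

# Solving 3x + 6 = 21 (worked solution, invented for the example):
expected = [("subtract 6 from both sides", 15.0), ("divide both sides by 3", 5.0)]
print(diagnose_first_error(expected, [15.0, 7.0]))
# -> went off track at step 'divide both sides by 3'
```

That is exactly the kind of granularity a single teacher cannot provide for 30-200 learners at once.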
So just how big does the data behind all this get? Big. That's the short answer. This is one of the main reasons why the uptake of AL by institutions is so slow; things get pretty complex when we are talking about human behavior, and machines can't do it all. Learning is not something you can just throw technology at.
Some things are more straightforward. A timestamp feature detects that a student is taking quite a long time to answer a question, and the platform offers supporting materials, prompts easier versions of the task, or breaks it into more manageable components. Trackers measure how long a student stays on task before drifting off to "world's cutest puppies" videos for a welcome distraction, make assumptions about attention span, and offer more granular interventions next time. It might even offer a puppy meme as a reward for completion (that idea is copyright of NEO Academy).

But what does a score on a test actually mean? What are the deeper conclusions about approaches to learning, learning behaviors, and task engagement that underpin the more simplistic data? That is when things get big. The algorithms used to build a learner profile of traits are hugely complex, not least because they also feed into the UX improvements that make the learner experience better and into new iterations of content that teaching staff have to figure out and produce. That gets fed back in, and the data collection has to adapt to the new variables and, almost, start again. Our heads hurt just thinking about that. Perhaps it's time for those puppy videos. We'll see you in the next paragraph.
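Before we go, and to show how simple the simple end really is, here is a toy version of that timestamp rule. The thresholds and intervention names are invented for the example; real platforms do far more than this.

```python
def choose_intervention(seconds_on_question: float, attempts: int) -> str:
    """Toy time-on-task trigger: the longer a learner is stuck, the more
    support the platform offers. Thresholds are invented for illustration."""
    if seconds_on_question < 90:
        return "no intervention"                  # still within a normal range
    if seconds_on_question < 240 and attempts <= 1:
        return "offer supporting material"        # a hint or a worked example
    if attempts >= 2:
        return "offer an easier version"          # step the difficulty down
    return "break the task into smaller parts"    # scaffold the question

print(choose_intervention(300, 1))  # -> "break the task into smaller parts"
```

The hard part starts when surface signals like these have to be turned into conclusions about how a learner actually learns.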
OK, we're back. Here is another big question about how AL actually works: learning is not linear, and the way we measure it is anything but easy. AL can work really well with procedural knowledge in fields like engineering; research has shown that AL used well in this area was "functionally acceptable and capable of representing an expert".
In other forms of learning, however, AL has work to do. Metacognitive learning, where learners are supported to become aware of which strategies work for them, is still complex even for the rich data modeling of the most sophisticated offerings out there. Learning to learn is something AL itself does every second, but recognizing and supporting that process in humans is not quite possible at present. Measuring "knowledge" and supporting its acquisition is something AL does very well. Measuring skills and competencies, less so; and when it comes to measuring awareness of the learning process and of effective strategies, there is much work to do.
If we are honest, it is tempting to conclude that AL has managed to be the perfect tool for the wrong age of education. If we were happy with the production-line model, where students memorize and repeat, and demonstrate knowledge without exploring how they actually acquired it, then AL would be the panacea, and educators could shift to more targeted interventions, coaching and mentoring, researching and responding to the shifting data from the AL platform.
However, that is a lazy conclusion for one reason, in one word: adaptive. AL is just getting started, and there is huge potential for it to be as central to learning environments of the future as the blackboard and chalk were to the learning environments of yesterday. Check back in next week as we explore just what might lie ahead for AL in the future of education.