
How Can We Use xAPI To Personalise a Learning Experience?

We often get asked, can we use xAPI data to personalise a learning experience? And then, the people really pushing their luck ask, could we use Machine Learning to improve the results over time? Well, the best way to demonstrate this is to show one we built previously.

Originally commissioned as part of a project for City & Guilds, the Performance Optimiser dashboard lets tutors personalise feedback to students. One element in particular, the Momentum Quadrant, is shown above. The quadrant plots learners on a benchmarked graph against each other, so that an informed tutor can make inferences about a learner's status within a learning experience.

At the time of building we didn't extend this feature with any Machine Learning techniques; interventions were made entirely at the behest of the tutor.

In this blog we'll explore how we could have extended this feature to become smarter over time, improving both our benchmarking and our feedback mechanisms using Machine Learning techniques.

On the X axis we see Engagement. This is a measure of how 'engaged' the learner is with the experience. Engagement is obviously a fairly generic term, but in this case we take it to mean how often a learner logs in, views content, takes part in conversations / user-generated content exercises and generally performs actions that are not strictly necessary to progress – voting up another user's responses, for example.

Each time the learner performs one of these actions it generates an xAPI statement. The Performance Optimiser grabs this data from the LRS and places it into a learner's 'bucket', counting up how many statements that learner has in relation to engagement.
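To make that concrete, here is a minimal sketch of how that counting might look once the statements have been pulled from the LRS. The verb IRIs and the bucket structure are illustrative assumptions, not the exact rules the Performance Optimiser uses:

```python
from collections import Counter

# Verbs we choose to treat as 'engagement' signals (illustrative IRIs only)
ENGAGEMENT_VERBS = {
    "http://adlnet.gov/expapi/verbs/commented",
    "http://id.tincanapi.com/verb/viewed",
    "https://example.com/verbs/voted-up",   # hypothetical custom verb
}

def engagement_buckets(statements):
    """Count engagement-related xAPI statements per learner."""
    buckets = Counter()
    for stmt in statements:
        actor = stmt["actor"].get("mbox") or stmt["actor"].get("name")
        if stmt["verb"]["id"] in ENGAGEMENT_VERBS:
            buckets[actor] += 1
    return buckets
```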

On the Y axis we have Progress. This is a more concrete measure of how far through the learning experience the learner has made it; here there is a finite measure. We know how much content and how many exercises there are to complete, and so we use the learner's journey through them to calculate progression.
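Because the journey is finite, progress reduces to a simple ratio. A sketch, assuming we can list every activity ID in the experience:

```python
def progress(completed_ids, journey_ids):
    """Fraction of the known journey a learner has completed (0.0 to 1.0)."""
    return len(set(completed_ids) & set(journey_ids)) / len(journey_ids)
```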

With these two measures in mind we can start thinking about plotting each learner's position, relative to the rest of the cohort, on the Momentum Quadrant. To do this we first calculate the mean position of a hypothetical learner within the cohort. Then, for each learner in turn, we calculate how far away from the mean they currently are.

For this we use a Standard Deviation calculation and make a few assumptions to begin with (these could be improved by Machine Learning in time; first we need a starting point). First we say that any learner who appears on the graph 0.5 SD or more below the mean should be rendered in 'Red'.

These folks are struggling. We then say that any learner between 0.49 SD below the mean and 0.49 SD above it should be represented as 'Amber'. And finally, anyone 0.5 SD or more above the mean should be 'Green'; these learners appear to be doing fine.
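A minimal sketch of that banding, which could be applied to either axis (the 0.5 SD thresholds are the starting assumptions described above):

```python
import statistics

def rag_status(cohort_values, learner_value):
    """Classify a learner as Red / Amber / Green by distance from the cohort mean."""
    mean = statistics.mean(cohort_values)
    sd = statistics.pstdev(cohort_values) or 1.0  # guard against a uniform cohort
    z = (learner_value - mean) / sd
    if z <= -0.5:
        return "Red"    # half an SD or more below the mean: struggling
    if z >= 0.5:
        return "Green"  # half an SD or more above the mean: doing fine
    return "Amber"      # within half an SD of the mean
```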

You end up with something like the graph now shown above, where the numbers represent the individual learners. 1-3 are struggling, 4-6 are mid-range and 7+ are doing well. Obviously I’ve skewed this example for simplicity; it wouldn’t work out quite like this and, given our base calculation, a lot of learners would centre around the mean. You’d have to work quite hard (or not, as the case may be) to move yourself into Red / Green status.

At the moment our Momentum Quadrant requires a human tutor to come in, interpret the graph and take appropriate action. But we can use our xAPI ecosystem to trigger activity automatically if we want to be really proactive about the situation.

The overlay above shows a simple 9-space rubric that we could use to trigger different actions or activities, based on a learner's position in the quadrant at a point in time – perhaps on a weekly basis.

So, for example, those learners who fell into the very bottom left square need some personal attention. There seems little point messaging the learner directly; they don’t engage with the experience, so why would they engage with an email? A message to the Tutor is more appropriate here; they need to reach out to provide some form of pastoral support.

For those learners who fail to engage but make good progress we have a more interesting decision – do we mind? After all, if they are progressing, what more could we ask?

Well, we can always push for a little more and, in the case of the top left square, we could reach out to the learner and ask if they might mentor someone less capable than themselves. This could be of material benefit to those in the bottom right, who appear keen but are failing to progress. A one-to-one mentoring relationship could help them.
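As a sketch, that rubric could be expressed as a simple lookup from a learner's cell to an action and a recipient. The cell names, channels and messages below are assumptions for illustration, not the production rules:

```python
# Hypothetical mapping from (engagement band, progress band) to an intervention.
RUBRIC_ACTIONS = {
    ("low",  "low"):  ("tutor",   "Reach out personally; pastoral support needed"),
    ("low",  "high"): ("learner", "Invite to mentor a peer who is keen but stuck"),
    ("high", "low"):  ("tutor",   "Pair with a mentor; engaged but not progressing"),
    ("high", "high"): ("learner", "Send encouragement; keep doing what you're doing"),
    # ... the remaining five cells would be filled in the same way
}

def band(z_score):
    """Collapse a z-score into one of the three rubric bands."""
    if z_score <= -0.5:
        return "low"
    if z_score >= 0.5:
        return "high"
    return "mid"

def intervention(engagement_z, progress_z):
    cell = (band(engagement_z), band(progress_z))
    return RUBRIC_ACTIONS.get(cell, ("tutor", "Review manually"))
```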

Of course all of this is based on assumptions of what people might benefit from, which is a dangerous game. To take this to the next level and become more predictive, we need to create a feedback mechanism that accounts for the intervention and any resulting movement in momentum.

Again, we can do this with xAPI as we track activity, but we now need to bring in some further Machine Learning techniques to improve the system…

Could Machine Learning improve our personalisation results?

Our system thus far is a closed one; we pre-program some assumptions into a model and the model never changes. But of course our initial assumptions about progression and engagement were just that: assumptions.

We can potentially refine this model with the addition of further real-world data on how learners interact with our system and our feedback mechanism. Machine Learning offers us this possibility.

When people start asking about Machine Learning (ML), they sometimes have stars in their eyes, feeling like this is the magic black box that will provide all their answers. But ML can take you every bit as far off course in refining your analysis as humans working with spreadsheets can.

It is easy to be overconfident and to assume the model won't be as fraught with error because you're now trusting a computer. It remains essential to know the overarching goals and metrics in order to collect the right data and select an optimal ML approach.

In the case of learner momentum, we're dealing with a question of classification: which intervention is likely to be most appropriate? There are many approaches to this, depending on the nature and scale of the data set. For example, Decision Trees are a common approach for classification in ML.

Take a look at the below example, taken from Wikipedia, for how a Decision Tree could help us to decide if we want to play outside:
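The same idea can be sketched in a few lines of scikit-learn; the weather data below is invented to mimic that example:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy 'shall we play outside?' data (invented for illustration).
# Feature columns: [sunny, high_humidity, windy]
X = [
    [1, 1, 0],  # sunny, humid, calm
    [1, 0, 0],  # sunny, dry, calm
    [0, 0, 1],  # overcast, dry, windy
    [0, 1, 1],  # overcast, humid, windy
    [1, 0, 1],  # sunny, dry, windy
]
y = ["no", "yes", "yes", "no", "yes"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["sunny", "high_humidity", "windy"]))
print(tree.predict([[1, 1, 0]]))  # sunny and humid -> expect 'no'
```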

Decision Trees can give highly accurate results for the data set they were trained with, but are more likely to give woefully inaccurate predictions for new data; this is known as overfitting.

To solve the overfitting issue, multiple independent decision trees can be considered together in a method known as Random Forests.
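Applied to our momentum problem, a Random Forest could be trained on historical records of where learners sat in the quadrant and which intervention was followed by the biggest improvement. Everything below (feature choice, label names and the data itself) is an assumption, included only to show the shape of the approach:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical history: [engagement_z, progress_z] when an intervention was made,
# labelled with the intervention that was followed by the best momentum gain.
X = [[-1.2, -0.8], [-0.9, 1.1], [1.3, -1.0], [0.2, 0.1], [-1.1, -1.3], [0.9, 1.2]]
y = ["tutor_outreach", "invite_to_mentor", "assign_mentor",
     "nudge_email", "tutor_outreach", "encourage"]

# Many independent trees vote on the answer, which damps the overfitting
# a single Decision Tree is prone to.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict([[-1.0, -0.9]]))  # e.g. 'tutor_outreach' for a disengaged, stalled learner
```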

The result is a highly scalable approach that generates easily interpretable results and is not subject to distortion by outliers. But Decision Trees can also be heavily context dependent; the above example makes perfect sense if you live in a very warm climate – you would never go and play outside if it was humid and sunny.

…but if you were from the UK, that’s pretty much the only time you’d go play outside!

As such, we might also consider another Machine Learning approach for our particular problem, Neural Networks (a term which actually covers a variety of related approaches).

Neural Networks’ strength is their ability to detect complex relationships in the data, particularly in cases where the boundaries between different classifications are non-linear.

The power of Neural Networks comes at a cost as they require a large set of training data, take considerable tuning to get optimal results, and can also be prone to overfitting.
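For comparison, the same (invented) data could be fed to a small neural network. The scikit-learn MLPClassifier below is just one of many possible implementations, and the hidden layer sizes and iteration limit are placeholder hyperparameters that would need proper tuning on a real data set:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same hypothetical features and labels as the Random Forest sketch; a real
# network would need far more examples than this to learn anything useful.
X = [[-1.2, -0.8], [-0.9, 1.1], [1.3, -1.0], [0.2, 0.1], [-1.1, -1.3], [0.9, 1.2]]
y = ["tutor_outreach", "invite_to_mentor", "assign_mentor",
     "nudge_email", "tutor_outreach", "encourage"]

model = make_pipeline(
    StandardScaler(),  # neural networks are sensitive to feature scale
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.8, 1.0]]))
```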

While Learning Pool (formerly HT2 Labs) has primarily focused on Neural Networks’ capabilities for classification in our text analytics work, their ability to deal with non-linear data makes them a powerful choice for the kind of predictive analysis we are thinking about here.

At the end of the day both the data selection and the methods applied will impact the success of an ML project, but research has shown that getting the right data has a greater impact than the selection of a specific method; there is often more than one effective approach for a given class of problem.

As is the case for any analysis, the data is the key to success or failure.

Proper planning will allow you to collect the right data, with the right level of quality to give meaningful results. Quality, in this sense, is determined by a number of different elements, such as having a standardised format which can help reduce the number of gaps, duplicates, omissions and errors in the dataset.

The dataset must also be representative if it is to transfer across circumstances; training our momentum models on data from a single demographic, for example, would negatively impact the accuracy of the model.

This brings us right back around to xAPI; adopting a standard format like xAPI to collect validated data across a range of activity providers can help to solve many of the issues mentioned above.
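For readers who haven't met it, an xAPI statement is a small 'actor, verb, object' record with a standard shape. The example below follows the specification's field names, but the learner, course and values are invented:

```python
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.com/course/module-3",
        "definition": {"name": {"en-GB": "Module 3"}},
    },
    "result": {"completion": True},
    "timestamp": "2019-01-01T09:00:00Z",
}
```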

We actually covered this topic quite heavily in our latest free OLX: ‘Demystifying Personalised Learning’.

In the video below I explain these concepts for our OLX participants; you may also find it useful.

If you are considering using data to personalise a learning experience, you should seriously consider adopting a standardised approach to collecting your learning data.

Read more about xAPI with our handy free guide and then get started with Learning Pool LRS, our Open Source Learning Record Store, for free.
