BARB recruits 12,000 people across 5,100 households, chosen to be representative of the overall UK viewing public. With a UK population of around 66 million, that makes those 12,000 people only about 0.018% of the population. This means those viewing figures are estimates extrapolated from a tiny sample, not exact counts.
With online platforms like YouTube, creators know exactly how many people have viewed their work. They can see who viewed it, where they viewed it from, how many times, how much of it they watched, and whether they came back for more. Social features show how many people liked it, and cookies and ad profiling probably mean they know the age and gender of their audience too. xAPI brings a similar experience to learning design.
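At its core, xAPI works by recording small JSON "statements" of the form actor–verb–object, which a Learning Record Store collects and aggregates. Here's a minimal sketch of what one such statement looks like; the learner name, email, and course URLs are made up for illustration, but the actor/verb/object shape follows the xAPI specification.

```python
# A hypothetical xAPI statement: "Ada Learner experienced Fire Safety: Module 1".
# actor, verb, and object are the required fields; result is optional context.
statement = {
    "actor": {
        "name": "Ada Learner",                      # hypothetical learner
        "mbox": "mailto:ada.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-GB": "experienced"},
    },
    "object": {
        "id": "https://example.com/courses/fire-safety/module-1",  # hypothetical course ID
        "definition": {"name": {"en-GB": "Fire Safety: Module 1"}},
    },
    "result": {                                     # optional: how the interaction went
        "completion": True,
        "duration": "PT4M30S",                      # ISO 8601 duration: 4 min 30 s
    },
}
```

Because every interaction is captured in this uniform shape, statements from different courses, platforms and even apps can be pooled and queried together.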
In our last blog post (Are you experienced? An insight into Learning Experience Platforms) we talked about how we can tailor content to learners’ individual needs. To do that, we need to be able to track learners at a very granular level. The new technology of Learning Experience Platforms (LxPs) and Learning Record Stores (LRSs) will let us do that, and will shape the new landscape of learning design.
The horizon used to be a brick wall: many learning designers never saw how learners interacted with their content. Tech companies often talk about failing fast to ensure rapid improvement, but you can’t fail fast if you don’t know that you’re failing. We can’t trust a bog-standard happy sheet to assess the merits of a piece of learning, and now we don’t have to. We can measure so much without the unreliability of learner questionnaires and without the time and negotiation necessary for user testing. xAPI-enabled LxPs let you see exactly which questions learners consistently get wrong, which terms they’re searching for, what time they’re engaging with their learning, and so much more. This will bring about a whole new age of assessing learning effectiveness. We’re going to discover fascinating and useful statistics that we don’t even know about yet! All this information will let us change the landscape of learning design as we go along – the ground is fertile and pliant, and we can constantly remould it to our learners’ changing needs.
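To make the idea concrete, spotting the questions learners consistently get wrong boils down to a simple aggregation over xAPI statements. The sketch below assumes we've already pulled a batch of "answered" statements from an LRS (the sample data and question IDs are invented for illustration) and simply counts failures per question:

```python
from collections import Counter

# Hypothetical xAPI "answered" statements pulled from an LRS; in practice
# these would come from the LRS's statement API, filtered by verb.
statements = [
    {"object": {"id": "q1"}, "result": {"success": True}},
    {"object": {"id": "q2"}, "result": {"success": False}},
    {"object": {"id": "q2"}, "result": {"success": False}},
    {"object": {"id": "q3"}, "result": {"success": True}},
    {"object": {"id": "q2"}, "result": {"success": False}},
]

# Count failures per question to surface the ones learners keep getting wrong.
failures = Counter(
    s["object"]["id"] for s in statements if not s["result"]["success"]
)
print(failures.most_common(1))  # → [('q2', 3)]
```

The same pattern scales up: swap the verb filter and the grouping key and you're measuring video drop-off points or search-term frequency instead.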
As a quick example, we recently noticed that learners at an organisation using Learning Pool were consistently failing the same question. That seemed odd – it wasn’t a particularly difficult question. When we opened the course to have a look, we realised that a quick grammatical tweak would remove any ambiguity from the question. After that, learners performed significantly better on it. Without the data coming out of the course, we would never have spotted the problem, let alone solved it so quickly. At a macro level, this could enable us to make big changes with confidence – perhaps we’ll find that one of our learning content templates is significantly less effective than the others, or that learners never finish watching videos over five minutes long.