
Learning data analysis: Tools to improve performance and compliance training

With L&D departments increasingly being challenged on what they are achieving and required to take a more evidence-based approach, learning professionals find themselves needing new tools for support with learning data analysis. The Learning Analytics Canvas provides just that support, across both performance and compliance training. Here’s how it works.

The Learning Analytics Canvas (LAC; see diagram) is a free planning tool and checklist created by Learning Pool’s data scientists that can be used by anybody embarking on a project or program that is going to involve learning data, regardless of their level of data maturity.

[Figure: The Learning Analytics Canvas]

It is based on the Business Model Canvas (Osterwalder & Pigneur, 2010), which will be familiar to many. Working through an example project is really the easiest way to understand the LAC, but before we do that there are some core principles to explain.

6 Principles of the Learning Analytics Canvas

1. Be goal-driven in your use of data: all experts agree that defining goals at the outset is key to success.

2. Think from the start about the why, what, who, and when of the evaluation effort: why are you doing it, what are your KPIs, who is it for and what are the time constraints? 

3. Focus: Depending on the nature of the project, your data goals will probably focus on one or more of the following:       

  • Engagement
  • Knowledge (retention)
  • Behavior
  • Organizational change

These four goal areas will typically “staircase”. In other words, if you want to know whether behavior change occurred, you will probably also want to look back at how much of what was learned was retained and whether that retention contributed to the change in behavior, establishing a causal chain.

4. The LAC references three dimensions of data:

  1. Anonymous data
  2. Group data
  3. Individual data

In the real world, we don’t always have “perfect” data to work with. But depending on how your goals have been defined, anonymous data could be completely serviceable for what you want to do. Conversely, the data you have may well shape the scope and scale of what you are able to achieve, and thus your goals. It’s a feature of this type of planning that the contents of the boxes interact!

5. Look to assemble a portfolio of evidence rather than focusing on a ‘smoking gun’ – i.e. a single intervention that can be credited with achieving a performance improvement.

6. In the when box, we consider three different time stages or measurement points:

  1. Pre
  2. Post
  3. Post-post

You might learn different things about your learning intervention at each of these stages.

Putting the Learning Analytics Canvas to work

Anonymous data

Say, as a starting point, I want to get a quick look at whether my recently introduced performance support materials are more or less effective than what I had before (why). I might begin with some Google Analytics data or similar.

This will likely be anonymous data, not tied to any particular individual or team; telling me about the volume of usage of the new materials, how long people engage with them and so on (who). In terms of time stage (when), I could easily compare pre-deployment with post-deployment of the new materials to draw an initial comparison—have our engagement figures changed? (engagement)—and continue to monitor changes in usage and dwell time as the months and years wear on (post-post). Note that I don’t need to know anything about the individuals concerned at this stage in order to make inferences about whether we have moved the needle or not on engagement, but already I can start to make decisions and take action based on what the data is telling me. 
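
For readers who like to see this concretely, here is a minimal sketch of that pre/post comparison in Python. It assumes a daily CSV export from Google Analytics or a similar tool; the file name, column names, and deployment date are placeholders for the example, not anything prescribed by the Canvas.

```python
import pandas as pd

# Illustrative anonymous usage data: one row per day with a pageview count
# and an average dwell time (file and column names are placeholders).
usage = pd.read_csv("performance_support_usage.csv", parse_dates=["date"])

deployment_date = pd.Timestamp("2024-01-15")  # when the new materials went live
pre = usage[usage["date"] < deployment_date]
post = usage[usage["date"] >= deployment_date]

# Compare average daily engagement before and after deployment.
for metric in ["pageviews", "avg_time_on_page_seconds"]:
    change = (post[metric].mean() - pre[metric].mean()) / pre[metric].mean() * 100
    print(f"{metric}: pre={pre[metric].mean():.1f}, "
          f"post={post[metric].mean():.1f} ({change:+.1f}%)")
```

Re-running the same comparison at intervals gives you the post-post view as usage settles down over time.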

Group data

The analysis would be better if I could have group data (who) identifying different teams or populations, through cohort analysis. And for more of a gold-standard analysis, I might set up a control group of people who did not receive the new materials but stuck with the old stuff. Widening the lens, we could look at how the data breaks down between different functional, geographic, or customer groups, for instance, enabling a different scale of decision-making.
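
A minimal sketch of that cohort comparison might look like the following, assuming a simple export with one row per learner, a group label, and whatever engagement or assessment metric you have chosen; the column names and the use of Welch’s t-test are illustrative choices rather than requirements of the LAC.

```python
import pandas as pd
from scipy import stats

# Illustrative group data: one row per learner with a cohort label and a metric.
scores = pd.read_csv("cohort_scores.csv")  # columns: group, function, score

new = scores.loc[scores["group"] == "new", "score"]
control = scores.loc[scores["group"] == "control", "score"]

# Welch's t-test: did the cohort with the new materials do measurably better
# than the control group that kept the old materials?
t_stat, p_value = stats.ttest_ind(new, control, equal_var=False)
print(f"new mean={new.mean():.2f}, control mean={control.mean():.2f}, p={p_value:.3f}")

# Widening the lens: break the same metric down by function (or region, customer group).
print(scores.groupby(["group", "function"])["score"].mean())
```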

Individual data

Now let’s say I am able to get individual data: data tied to particular people about whom the organization holds other information, some of which I may have access to for analytical purposes. This allows me to go deeper. Group analysis also becomes easier at this point: team, function, and role data is likely to be available once we know who an individual is.

It is when we work with individual data that things really open up. A useful addition to your tech toolbox at this point, moving you along the path to more sophisticated use of learning data, is xAPI together with a suitable learning record store (LRS). xAPI statements can be used not only to describe and analyze activity but also to trigger actions within a system.
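
To make that concrete, here is a minimal sketch of sending a single xAPI statement to an LRS from Python using the requests library. The endpoint, credentials, learner, and activity identifiers below are placeholders; a conformant LRS will accept a statement in this actor-verb-object form.

```python
import requests

# Placeholder LRS endpoint and credentials - substitute your own.
LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
AUTH = ("lrs_username", "lrs_password")

# A minimal xAPI statement: actor, verb, object, plus an optional result.
statement = {
    "actor": {"mbox": "mailto:jo.bloggs@example.com", "name": "Jo Bloggs"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/safe-handling-refresher",
        "definition": {"name": {"en-US": "Safe handling procedure refresher"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()  # the LRS returns the new statement's ID on success
```

Once statements like this accumulate in the LRS, the same records that feed your analysis can also drive actions, such as triggering a follow-up resource when a particular result is recorded.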

Retention

So far we have only talked about performance support at the engagement stage. Moving on a bit in our ambitions, we might begin to look at knowledge (retention) and think about that box of our Canvas. 

Even in the context of performance support, there might well be an instance where I wanted to know whether the just-in-time resources I provided are having any lasting effect. But in the case of compliance training, it could be critical to know whether we are bringing about an improvement in retention. To give a practical example, if tested at a certain point after the first engagement, how well can the learner carry out a particular procedure in accordance with regulatory requirements? 

We are venturing into the territory of learning transfer here, but the point about this approach is that you don’t necessarily have to invoke the whole machinery of a four-step or seven-step or 15-step evaluation process if what you want to find out is something quite specific.

We might just need to find out whether our performance or compliance training materials are not only engaging but also impactful—in the limited sense of being memorable. In terms of the measurement stage, comparing pre- and post-data is of course useful (when). As far as post data goes, there are L&D departments that use a 12-12-12 model for retention—i.e. test 12 hours after the intervention, then after 12 days, and then after 12 months. We can do this at group or individual level, and even anonymized survey data could be useful at this point, because fundamentally the focus of our inquiry is the intervention itself: the content, the resource, the experience.
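
As a minimal sketch, checking retention against a 12-12-12 style schedule could be as simple as the following, assuming an export of assessment results with a learner ID, a measurement stage, and a score (the file and column names are invented for the example):

```python
import pandas as pd

# Illustrative retention data: one row per learner per measurement stage,
# where stage is one of "post", "12_hours", "12_days", "12_months".
results = pd.read_csv("retention_scores.csv")  # columns: learner_id, stage, score

order = ["post", "12_hours", "12_days", "12_months"]
by_stage = results.groupby("stage")["score"].agg(["mean", "count"]).reindex(order)
print(by_stage)

# Express each follow-up as a share of the score immediately after the intervention.
immediate = by_stage.loc["post", "mean"]
for stage in order[1:]:
    print(f"{stage}: {by_stage.loc[stage, 'mean'] / immediate:.0%} of the post score")
```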

Behavior

When we move on to the next stage, behavior, that locus changes. This is where we start to require additional sources of data: just knowing that someone accessed a particular piece of elearning, for instance, or scored well on an assessment, won’t tell us much. Self-assessment can be useful in seeing whether behavior change has occurred, and it becomes even more useful when there is some form of triangulation: 360-degree surveys, for instance; comparing a salesperson’s self-assessment of their confidence against how much they actually sold over time; or comparing “hard” measures, such as the quantity and quality of a programming team’s code commits, against how supervisors and peers rate one another.

This is where we begin to talk about assembling a portfolio of evidence. Certainty might be difficult to achieve, given the many factors that tend to bedevil learning analytics at these later, most important stages, but we can still assemble a portfolio of evidence that says: on the balance of probability, it seems this intervention has had this effect, because of this evidence.
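
As one hedged illustration of that triangulation, the sketch below lines up a “soft” measure (self-rated confidence) against a “hard” one (sales achieved) for the same people over time. The data file and column names are invented for the example, and a correlation like this is one entry in the portfolio of evidence rather than proof on its own.

```python
import pandas as pd

# Illustrative behavior data: one row per salesperson per quarter.
evidence = pd.read_csv("behavior_evidence.csv")  # columns: person_id, quarter, self_confidence, sales

# How closely does self-assessed confidence track what was actually sold?
correlation = evidence["self_confidence"].corr(evidence["sales"])
print(f"Correlation between self-assessment and sales: {correlation:.2f}")

# Trend over time: did both measures move after the intervention?
print(evidence.groupby("quarter")[["self_confidence", "sales"]].mean())
```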

Organizational change

If our goal is to look at organizational change (what), the locus of inquiry shifts again: from individual behaviors to group behaviors, and to how multiple sets of behaviors across multiple parts of the organization combine to change the organization’s behavior and culture.

This could be evidenced by a headline sales number, or retention figures, or statistics on diversity, or whichever Key Performance Indicator is most relevant. The LAC encourages users to get to this measure quickly: identifying the KPI that would most readily demonstrate the why is the second thing they do when completing the Canvas. This helps keep a “true north” for the exercise: “This is where we are aiming.”

The difficulties of achieving certainty are compounded when looking at the entire organization, so this is where we really do need to take a portfolio-of-evidence approach and where that approach comes into its own. 

Conclusion

From the very simple beginnings we have described here, with modest ambitions, through to a complex, multidimensional, organization-wide analysis effort, the structure of the LAC still contains the important points we need to think about. The Canvas is broad enough to hold whatever picture you want to paint of your aspirations for learning data. We hope you find it useful as you continue on your data analytics journey!

You can view more of our learning analytics examples via our case studies page. Or download our new eBook, ‘Adding data and learning analytics to your organization’ to find out more about good analytics practice.
