
Analytics or “analytics”: It’s not like tomato, tom-ah-to.

Analytics. It has become the go-to (and let’s face it, often overused) word for compliance practitioners who seek to evolve their programs in a meaningful way by leveraging data.

Compliance training is a powerful source of this type of transformational data for teams to leverage, as it is one of the most substantive interactions the organization has with every employee. But that data often falls short. The challenge with much of the data that comes out of compliance training is that it's not truly insightful. Despite many vendors' best efforts, the "data" that comes out of most training platforms is completion and quiz data, and that simply isn't deep enough for most teams.

Good behavioral data is key in the evolution of compliance programs. The DOJ provides a roadmap and descriptive guidance for what this looks like. Within that guidance, the terms "effective" and "effectiveness" come up 54 times. 54 times. It's no surprise, then, that in conversations with clients and peers alike, measuring program effectiveness is the question many compliance teams are on a continual journey to answer.

And within the training and communications program element specifically, this is especially critical now that the DOJ has inserted a new question: “Has the company evaluated the extent to which the training has an impact on employee behavior or operations?” So, how do you know if training has an impact?

While many companies tout their "analytics" capabilities, there remains confusion among compliance practitioners about what the differences are, and those differences are vast. This blog post breaks down a few of the key differences so that your team can separate Analytics from analytics.

Behavioral insight, on a per-learner level

In order to extract the best data, you have to deploy the best training. When delivered in a web-based format, the best training is adaptive, dynamically changing throughout the course based on the level of proficiency each employee demonstrates. Courses should be designed for maximum situational simulation, guiding learners through scenarios that may arise in the course of their jobs. In other words, training should speak to the learner and not at them, be realistic in the context of your company, and be interactive, not just for the learner's benefit but for the quality of the analytics that kind of training produces.

As learners navigate the course, the difficulty level goes up or down depending on whether they answer correctly or incorrectly. Learners who do not answer a scenario correctly are given immediate coaching and feedback, along with an equivalent alternate scenario where they must demonstrate proficiency before moving on.

Because of this, Learner A may see completely different scenarios than Learner B, but both are moving through the course at their own pace and both are being coached up the proficiency curve by demonstrating knowledge all the way through. For a seasoned employee, demonstrated knowledge of specific concepts and risk areas can yield a training experience that's up to 50% faster.

It's important to note that this type of technology is entirely distinct from "branching" technology, which creates different pathways for different sets of learners based on demographic data (e.g., managers vs. individual contributors, employees in Italy vs. Mexico, and so on). Branching is an important, useful technology, but it is not a dynamic learning experience.

The flow chart below shows a sample course journey:
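To make that flow concrete, here is a minimal sketch, in Python, of how an adaptive scenario loop might work. The scenario bank, function names, and difficulty rules are illustrative assumptions for the example, not a description of any particular platform.

```python
import random

# Hypothetical scenario bank: topic -> difficulty level -> equivalent scenarios
SCENARIO_BANK = {
    "gifts_and_hospitality": {
        1: ["G&H scenario 1a", "G&H scenario 1b"],
        2: ["G&H scenario 2a", "G&H scenario 2b"],
        3: ["G&H scenario 3a", "G&H scenario 3b"],
    },
}

def run_topic(topic, ask, coach, start_level=2, max_attempts=20):
    """Walk one learner through one topic, recording every decision they make."""
    levels = SCENARIO_BANK[topic]
    level, journey, seen = start_level, [], set()
    attempts = 0
    while level <= max(levels) and attempts < max_attempts:
        attempts += 1
        # Prefer an equivalent scenario the learner has not yet seen at this level.
        pool = [s for s in levels[level] if s not in seen] or levels[level]
        scenario = random.choice(pool)
        seen.add(scenario)
        correct = ask(scenario)            # the learner's decision in the simulation
        journey.append({"scenario": scenario, "level": level, "correct": correct})
        if correct:
            level += 1                     # proficiency shown: step up the curve
        else:
            coach(scenario)                # immediate coaching and feedback; the next pass
                                           # serves an equivalent alternate at the same level
    return journey

# Example: a scripted learner who misses their first scenario, then recovers.
answers = iter([False, True, True, True])
print(run_topic("gifts_and_hospitality", ask=lambda s: next(answers), coach=print))
```

Note that the output of the loop is a per-learner record of every decision, not just a completion flag, and that record is the raw material for the behavioral analytics discussed below.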

Garbage in, garbage out

Your analytics are only as good as the data they pull from. Many vendors that refer to “training analytics” are actually talking about quiz question data.

This means that all learners are getting the same questions, rather than scenarios based on what they know and what they need to know. Although the journey may begin with role-based or self-identifying questions, from there it becomes a one-size-fits-all setup: content, followed by a topically aligned question, followed by answer options, say A through D. If the learner guesses 'A' and it's incorrect, there is no equivalent alternate presented. Rather, they are presented with the same question again, and this time choose 'B', because they now know it's not 'A'. It's a process of elimination. Learners can almost brute-force their way through.
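To see why that matters, here is a small illustration, with invented numbers, of the process-of-elimination effect: with four answer options and retry-until-correct quizzing, even a learner who is purely guessing reaches the "correct" answer within at most four attempts, and the attempt count is often the only signal the platform records.

```python
import random

def attempts_to_pass_by_guessing(num_options=4):
    """Simulate one guessing learner on one static multiple-choice question."""
    correct = random.randrange(num_options)
    remaining = list(range(num_options))
    attempts = 0
    while remaining:
        attempts += 1
        guess = random.choice(remaining)
        if guess == correct:
            return attempts            # the only "analytics" signal that survives
        remaining.remove(guess)        # process of elimination: never repeat a wrong answer

trials = [attempts_to_pass_by_guessing() for _ in range(10_000)]
print(sum(trials) / len(trials))       # averages around 2.5 attempts, with no knowledge involved
```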

While this type of training data is better than no data at all, it comes with limitations. Here’s a breakdown of the differences:

Quiz Question "Analytics" vs. Adaptive Behavioral Insight

What it is
  • Quiz Question "Analytics": an output of quiz questions. All learners within the designated learning track receive the same questions, irrespective of knowledge level.
  • Adaptive Behavioral Insight: a proprietary platform that shows each learner's decision-making journey in simulation. Learners receive different scenarios based on knowledge level, which can move up and down through the course of the training.

What it shows
  • Quiz Question "Analytics": the average number of attempts it took before answering the question correctly.
  • Adaptive Behavioral Insight: behavioral data based on the choices the employee made within the given scenario and its alternate equivalents.

Bias
  • Quiz Question "Analytics": high – questions are written as theoretical assessments, not application-oriented decisions, and there is no equivalent alternate in the event the learner chooses incorrectly.
  • Adaptive Behavioral Insight: low – scenarios are written as simulations that require application of a compliance risk topic.

Statistical Validity
  • Quiz Question "Analytics": low – answers do not change (if the learner selects "A" and it is incorrect, they select "B" on the next attempt out of process of elimination).
  • Adaptive Behavioral Insight: high – the learner is given an alternate, equivalent scenario and must demonstrate proficiency on the subject before moving on; simulation analysis is a demonstrated method of assessing risk.

Discoverability
  • Quiz Question "Analytics": high – brute-forced answers can't demonstrate learning.
  • Adaptive Behavioral Insight: low – coaching up the proficiency curve demonstrates the learning journey to mastery.

Segmentation and Benchmarking
  • Quiz Question "Analytics": low – no real point of comparison or meaningful insight that contributes to benchmarking.
  • Adaptive Behavioral Insight: high – the ability to compare segments across the organization and industry to determine behavioral risk hotspots and areas that need remediation; built to be a powerful tool for assessing performance year over year (see the sketch after this table).

Actionability
  • Quiz Question "Analytics": low – no true analytics that allow teams to target risk areas or trouble segments.
  • Adaptive Behavioral Insight: high – situational simulation presents actionable insights compliance can use to target remediation and guidance.
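To make the Segmentation and Benchmarking and Actionability rows concrete, here is a rough sketch of how per-learner behavioral records, like the journey output in the earlier sketch, could be rolled up into segment-level benchmarks that point to where remediation is needed. The segments, field names, and data are invented for the example.

```python
from collections import defaultdict

records = [
    {"segment": "Sales - EMEA",   "topic": "gifts_and_hospitality", "correct": False},
    {"segment": "Sales - EMEA",   "topic": "gifts_and_hospitality", "correct": False},
    {"segment": "Sales - EMEA",   "topic": "conflicts_of_interest", "correct": True},
    {"segment": "Finance - APAC", "topic": "gifts_and_hospitality", "correct": True},
    {"segment": "Finance - APAC", "topic": "conflicts_of_interest", "correct": True},
]

totals = defaultdict(lambda: [0, 0])              # (misses, total) per segment/topic pair
for r in records:
    key = (r["segment"], r["topic"])
    totals[key][1] += 1
    if not r["correct"]:
        totals[key][0] += 1

# Rank segment/topic pairs by miss rate to surface behavioral risk hotspots.
for (segment, topic), (missed, total) in sorted(
        totals.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{segment:14s} {topic:22s} miss rate {missed / total:.0%}")
```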

 

As you can see, the differences are significant. While there are many visually strong training styles in the market, including extensive video content, that content is not adaptive. The lack of adaptivity prevents in-course remediation and seat-time savings. And while the quiz data does filter through to the LMS, by the time the client sees knowledge deficiencies, those deficiencies haven't been remediated in-course and are now potentially discoverable.

Ultimately, all of this goes back to the question of effectiveness. If you're looking for jazzy training that's fun for your employees and counts completions, okay. But if you're in the market for training and looking to leverage meaningful training data to meaningfully impact your program, then ask questions about bias, statistical validity, actionability, and the other items that matter to you as the champion of your company's ethics and compliance program.

Got a learning problem to solve?

Get in touch to discover how we can help
