Measuring learning transfer after training
If you work in L&D, it can be easy to see the launch of a training initiative as the finish line: completions are rolling in and quizzes have been passed. In reality, this is only the end of the first lap. Passing an end-of-course assessment may indicate that your audience has understood the material and can remember it at that moment, but truly determining learning transfer requires a longer-term strategy.
Enter the Kirkpatrick Evaluation Model, a tried and tested framework that organisations can use to measure the impact of a training programme. It consists of four key stages: Reaction > Learning > Behaviour > Results. While these don’t strictly need to be considered in a linear way, the four elements logically build on each other and can be used as a loose structure to plan out your measurement efforts. In this way, measuring learning transfer can be seen as a long-distance relay rather than a sprint finish. Let’s take a closer look.
Leg 1: Reaction – Setting the Pace
0-2 weeks post course completion 
The reaction phase is intended to gauge whether learners found the training generally enjoyable and, more importantly, useful and relevant to them. While this isn’t enough to tell you whether knowledge has been acquired or behaviours will change, a favourable reaction indicates that participants at least found value in the material and were paying attention.
This can be tracked with learner satisfaction and engagement surveys taken shortly after course completion. Aim for a mix of qualitative and quantitative questions. You could also consider Net Promoter Scores or pre- and post-course confidence scores. Ideally, you’ll also use things like focus groups or pilot phases before launching the full training programme, so you can incorporate feedback as you develop.
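For illustration only, here is a minimal sketch (in Python, with made-up data and hypothetical function names; in practice your survey tool or LMS would typically report these figures for you) of how a Net Promoter Score and a pre/post confidence shift might be calculated from raw survey responses.

```python
# Illustrative sketch only: assumes 0-10 NPS ratings and 1-5 confidence ratings
# collected before and after the course. Data and function names are hypothetical.

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to +100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

def average_confidence_shift(pre_scores, post_scores):
    """Mean change in self-reported confidence (e.g. on a 1-5 scale), pre vs post course."""
    pre_avg = sum(pre_scores) / len(pre_scores)
    post_avg = sum(post_scores) / len(post_scores)
    return round(post_avg - pre_avg, 2)

# Example survey data (made up for illustration)
nps_ratings = [10, 9, 8, 7, 9, 6, 10, 4, 9, 8]
pre_confidence = [2, 3, 2, 4, 3, 2, 3, 2, 3, 3]
post_confidence = [4, 4, 3, 5, 4, 3, 4, 3, 4, 4]

print(net_promoter_score(nps_ratings))                             # 30
print(average_confidence_shift(pre_confidence, post_confidence))   # 1.1
```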
Leg 2: Learning and Retention – Gathering Speed
0-12+ weeks post course completion
This stage is all about measuring whether the knowledge or skills you intended to teach have actually been acquired through your training intervention. Pre- and post-course assessments can be a good starting point, but these often only test short-term recall, which is easily skewed by the recency effect and by the fact that simple quiz questions are often easy to guess.
To gauge whether participants have really understood the material, assessment activities should involve a genuine element of application rather than basic multiple-choice questions. Think detailed scenarios, simulation exercises or role plays. Timing is crucial too; it’s no good learning something if you’ve forgotten it within a week. Consider using a spaced practice campaign to facilitate and track retention, using a mix of reinforcement quizzes and bite-sized content refreshers such as infographics, videos or micro-modules.
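As a rough illustration of what a spaced practice cadence could look like, the sketch below (Python; the intervals and activity names are assumptions, not a prescribed schedule) generates an expanding series of reinforcement touchpoints following course completion. In reality this scheduling would usually be handled by your LMS or learning platform.

```python
# Illustrative sketch only: intervals and activity types are assumptions.

from datetime import date, timedelta

# Expanding intervals (days after course completion) with a scaffolded mix
# of retention-focused and application-focused activities.
SPACED_CAMPAIGN = [
    (3,  "Reinforcement quiz: key concepts"),
    (7,  "Micro-module refresher"),
    (14, "Scenario-based quiz"),
    (30, "Practical workplace challenge"),
    (60, "Reflection prompt: what worked in practice?"),
]

def build_schedule(completion_date, campaign=SPACED_CAMPAIGN):
    """Return (date, activity) touchpoints spaced out after course completion."""
    return [(completion_date + timedelta(days=offset), activity)
            for offset, activity in campaign]

for due, activity in build_schedule(date(2024, 9, 2)):
    print(f"{due}: {activity}")
```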
Leg 3: Behaviour – Clearing Hurdles
1-6+ months post course completion
This stage of evaluation is perhaps one of the biggest tests of learning transfer. Can learners make the leap from theory to practice? Demonstrating skills or knowledge in a safe, controlled training setting doesn’t guarantee follow-through in the workplace, where employees face hurdles like time pressure, competing priorities and real emotions.
Spaced practice campaigns should be used not just to ensure knowledge retention but also to encourage application: for instance, by setting real-world practical tasks or challenges and providing opportunities to reflect on what was and wasn’t successful in practice. A scaffolded approach to spaced practice works well, moving from knowledge-retention activities to application activities and increasing in complexity as your learners gain confidence.
Methods of measurement will vary depending on the behaviour, but could include self-reporting, manager observations, system or tool usage data, audit results, and peer or customer feedback. Studies suggest it takes an average of 66 days for a new behaviour to become habitual and confident, but this can be much longer depending on the individual, the environment, the complexity of the behaviour and how often someone needs to demonstrate it. However you choose to track it, make sure you have a long-term strategy, as behaviours may change immediately post-training and then start to taper off.
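To make the taper-off point concrete, here is one possible sketch (Python; the field names, example data and 80% threshold are assumptions for illustration) of comparing recent system-usage data against the level observed in the weeks immediately after training, so a drop-off in the new behaviour can be flagged early.

```python
# Illustrative sketch only: data and the 80% threshold are assumptions.
# In practice this data would come from your systems, audits or manager observations.

def average(values):
    return sum(values) / len(values) if values else 0.0

def flag_taper_off(weekly_counts, baseline_weeks=4, threshold=0.8):
    """Flag if recent behaviour frequency falls below 80% of the post-training baseline.

    weekly_counts: how often the target behaviour was observed each week,
    starting from the first week after training.
    """
    baseline = average(weekly_counts[:baseline_weeks])
    recent = average(weekly_counts[-baseline_weeks:])
    return recent < threshold * baseline, baseline, recent

# Example: safety checklist completions per week after training (made up)
usage = [12, 11, 12, 10, 9, 8, 7, 6]
tapering, baseline, recent = flag_taper_off(usage)
print(f"Baseline {baseline:.1f}/week, recent {recent:.1f}/week, taper-off: {tapering}")
```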
Leg 4: Results – Crossing the Finish Line
3-12+ months post course completion
The final stage is not really about measuring learning transfer itself, but about whether changes in behaviour led to the organisational outcomes you hoped for. This is where individual performance adds up to team success: measurable, reportable and strategic. Find out how we saved a leading supermarket brand £2.5 million while improving retention and speed to competency.
While this is the last step in the Kirkpatrick model, it should actually be the first thing you consider when designing training. Identify the key business metrics you’re targeting, then work backwards to define the behaviours you think will drive that change, and therefore what the training should incorporate.
Be patient with this stage, and make sure you leave enough time before drawing conclusions, as organisational results can often lag behind behaviour change. For instance: 
- sales behaviours and conversion rates might improve quickly, but overall revenue may take much longer to reflect this due to long sales cycles or project timelines
- customer satisfaction rates might improve, but retention rates may only show change after renewal periods
- reports of compliance breaches might actually increase in the short term, as employees have a better understanding of compliance issues and how to report them
However, in some cases, as with a race, simply crossing the finish line doesn’t mean you got the result you wanted or expected. This doesn’t necessarily mean your training programme was a failure, but you may need to re-evaluate your assumptions. Were the behaviours you targeted really the ones with the most influence on the KPIs you wanted to improve? Are there external factors at play, like market conditions, resource shortages or technical and cultural barriers? This is a great opportunity to ask what else needs to change.
Don’t just inform, transform
The Kirkpatrick model reminds us that learning doesn’t end when the course does; it’s a relay from reaction to results, where each stage passes the baton to the next. Measuring satisfaction and knowledge is a start, but true value comes from tracking how learning translates into real-world behaviour and meaningful business outcomes.
At Learning Pool, we strive to design learning experiences that don’t just inform; they transform. Get in touch to find out how Learning Pool can help you design, develop and measure training that really works.