Employee performance goals: choosing the right training evaluation model

30 April 2021 by Ben Betts

Since the four-step Kirkpatrick/Katzell model of learning evaluation was first introduced some sixty years ago, there have been numerous revisions and new versions, each taking it in a slightly different direction. So which model is best for evaluating against employee performance goals?

The four-step model known in popular shorthand simply as ‘Kirkpatrick’ has come in for heavy criticism in some quarters. But research tells us it is still the evaluation methodology most widely used by learning professionals. Kirkpatrick is far from the only game in town, however.

Almost since its inception, the model has been modified, extended, revised, overhauled and tinkered with by successive generations of learning theorists and practitioners.

Jack J. Phillips proposed adding a fifth level to the model: return on investment (ROI).

Roger Kaufman argued that ROI could be considered part of level 4, but (with others) made his own suggestions for additions to the model, including a fifth level based on societal contribution and splitting level 1 into “input” and “process.”

Other refinements and additions to the model made over the years include:

  • The CIRO (Context, Input, Reaction, Outcome) approach: Warr, Bird, and Rackham (1970)
  • Hamblin (1974)
  • Brinkerhoff (1987)
  • Bushnell (1990)
  • Sleezer et al. (1992)
  • Fitz-enz (1994)
  • Bernthal (1995)
  • The Indiana University approach: Molenda, Pershing, and Reigeluth (1996)
  • The KPMT model: Kearns and Miller (1997)
  • The Carousel of Development: Industrial Society (2000)

And those are just the ones that stick reasonably closely to Kirkpatrick’s framework. There are others, which I won’t list here, that move further away from it.

Most recently, the learning-transfer evaluation model (LTEM), created by Will Thalheimer, has gained considerable traction with practitioners, and Brinkerhoff’s Success Case Method has become increasingly popular.

The result is that learning professionals now have a choice of models to deploy when evaluating learning. And if that choice seems a little baffling at first, don’t despair. There is a clear principle that can guide you in choosing which Kirkpatrick variant to use.

It all comes down to your goals – what you are trying to achieve by evaluating. Evaluation projects don’t all have the same aims. Similarly, many of these Kirkpatrick variants shift the model’s focus in ways that make them better suited to particular aims. So think about what you are trying to do, and choose the model that best helps you get there.

Here are three examples of how this might work in practice.

1. Prove ROI

If your main driver in evaluating is financial – you want to prove that your learning intervention saved the organization money, say, or led to a financially quantifiable improvement in productivity – then Jack J. Phillips is probably your guy.

Phillips adds a fifth level to Kirkpatrick, ROI, and gives a methodology for calculating the financial impact of programs. Bear in mind, however, that he himself said this method should be used selectively, as it is time-consuming and expensive.      
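
Phillips’s formula itself is simple: ROI (%) = (net program benefits ÷ program costs) × 100. As a quick, purely hypothetical illustration of the arithmetic (the figures and function name below are mine, not drawn from any real program):

  def phillips_roi(net_program_benefits: float, program_costs: float) -> float:
      """Phillips's level-5 metric: (net benefits / costs) * 100."""
      return (net_program_benefits / program_costs) * 100

  # Illustrative figures only: a program costing $80,000 that yields
  # $120,000 in monetized benefits has a net benefit of $40,000.
  costs, benefits = 80_000, 120_000
  print(f"ROI: {phillips_roi(benefits - costs, costs):.0f}%")  # ROI: 50%

The hard part, of course, is not the division but credibly converting benefits into money in the first place, which is partly why Phillips warns the method is costly to run.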


2. Improve training

Brinkerhoff’s Success Case Method (SCM) focuses on outliers: those who performed or scored well and those for whom the training failed. It uses qualitative methods to drill down into the reasons for this success or failure.

Although it can be turned to many uses, either on its own or in combination with other methodologies, SCM is your go-to model if your aim is to improve the quality and effectiveness of the learning you provide.
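
To make the screening step concrete, here is a minimal sketch of how you might pick out SCM’s extreme cases for interview. It assumes you already hold a per-learner outcome score; the data and function name are my own illustration, not part of Brinkerhoff’s published method (the real analytical work happens in the follow-up interviews):

  def select_extreme_cases(scores: dict[str, float], n: int = 2):
      """Return the n highest and n lowest scorers for follow-up interviews."""
      ranked = sorted(scores, key=scores.get)  # ascending by score
      return ranked[-n:], ranked[:n]           # (success cases, failure cases)

  # Hypothetical scores, for illustration only
  scores = {"ana": 92, "ben": 41, "cho": 88, "dev": 35, "eli": 77}
  successes, failures = select_extreme_cases(scores)
  print("Success stories to interview:", successes)  # ['cho', 'ana']
  print("Failure stories to interview:", failures)   # ['dev', 'ben']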


3. Prove learning transfer    

LTEM is strongly focused on learning transfer and has eight ‘tiers’ (as opposed to ‘levels’):

  1. Attendance
  2. Activity
  3. Learner perceptions
  4. Knowledge
  5. Decision-making competence
  6. Task competence
  7. Transfer
  8. Effects of transfer
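
If it helps to see that hierarchy as a structure, here is a small sketch of the tiers in code. The names and order are Thalheimer’s; the class and helper are my own illustration, not an official LTEM artifact (the ‘adequate from tier 5’ cut-off is explained below):

  from enum import IntEnum

  class LTEMTier(IntEnum):
      ATTENDANCE = 1
      ACTIVITY = 2
      LEARNER_PERCEPTIONS = 3
      KNOWLEDGE = 4
      DECISION_MAKING_COMPETENCE = 5
      TASK_COMPETENCE = 6
      TRANSFER = 7
      EFFECTS_OF_TRANSFER = 8

  def is_adequate(tier: LTEMTier) -> bool:
      """Thalheimer treats tiers 1-4, on their own, as inadequate evidence."""
      return tier >= LTEMTier.DECISION_MAKING_COMPETENCE

  print(is_adequate(LTEMTier.KNOWLEDGE))  # False
  print(is_adequate(LTEMTier.TRANSFER))   # True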


In the LTEM model it is not until tier 5, ‘decision-making competence’, that “adequate” metrics kick in, and then only if learners can demonstrate this competence once a sufficient period of time has elapsed since the training intervention (long-term remembering is one of Thalheimer’s critical concepts of learning). Impelled by his critique of Kirkpatrick, and determined to keep “a laser-like focus on transfer,” Thalheimer designed LTEM to focus evaluation on establishing a “causal pathway starting at learning and moving to job performance and then organizational results.” The key steps within this pathway are:

  • Decision-making competence
  • Task competence
  • Full transfer

Full transfer is when the subject can successfully put what they’ve learned into practice within their working lives without the need for significant help or prodding.

Full transfer is not, of course, the end of the story. That comes at tier 8, with proof of the effects of transfer in the working situation: evidence that the training has actually resulted in better performance, according to measures that mean something to the business.

The Phillips ROI method might tell you that your learning program had a financial impact on the organization, but if your goal is to measure against employee performance goals, then the causal pathway established by the LTEM model is the one for you.

Conclusions

The examples given here are admittedly broad-brush, but they serve to demonstrate that learning professionals have a choice of evaluation models and needn’t stick slavishly to Kirkpatrick, which, after 60-plus years in use, has perhaps reached the end of its useful life.

For a more in-depth exploration of how to measure employee performance goals, and for many more useful tools and insights, download our new eBook, ‘Adding data and learning analytics to your organization’.

Ben Betts
Chief Executive Officer

Ben serves as CEO for Learning Pool LTD, with responsibility for the commercial, product and people functions based mostly in the UK, reporting to the Group CEO.

Previously, Ben served as Chief Product Officer for Learning Pool where he worked to help define and develop Learning Pool’s next generation of workplace digital learning platforms, with a focus on Learning Experience Platforms and the Learning Analytics space.

Before Learning Pool, Ben helped to build HT2 Labs from humble beginnings into a globally recognized innovator in workplace digital learning. Learning Pool acquired HT2 Labs in June 2019.

Ben’s expertise is grounded in research: he completed his PhD on the impact of gamification on adult social learning, has authored and contributed chapters to many books, has published two peer-reviewed academic papers, and has presented at conferences around the world, including TEDx.
