Originally posted to Chief Learning Officer Magazine, February 2011 Edition. Reprinted here with permission.
By Dan Pontefract
The magnetic vortex known as the Four Levels of Evaluation, utilized pervasively and at times blindly within the learning industry, has run its course and is in need of an update, if not a replacement.
Since 1959, learning professionals en masse and across the planet have employed the Kirkpatrick Model as the sole basis of evaluation.
There have been challengers and pretenders, but the juggernaut framework known for the stages of Reaction, Learning, Behaviour and Results has remained steadfast through the years.
The problem? Like the cockroach, some 350 million years old, the Four Levels of Evaluation model has stood the test of time without any obligation to evolve. Our burgeoning 2.0 society, however, has changed faster than Moore’s Law or anyone else would ever have predicted.
Karie Willyerd, former CLO of Sun Microsystems and current CEO of Jambok, puts things into perspective ever so succinctly.
“Think about how much the world has changed in the last 50 years,” she said. “We didn’t have color television in most homes; the Internet was years away from common use; and most companies were just building out their first corporate training functions. That’s when Kirkpatrick’s levels first came out, and they’ve survived longer than almost any other business in existence. We now have incredible statistical tools at our disposal and yet we haven’t used them on a widespread basis to measure learning effects or investment plans.”
With society wrapped in a cultural transformation expedited by technology, we continue to use an evaluation model that, quite simply, was built on the premise that learning occurs solely in a classroom.
Donald Kirkpatrick himself, interviewed in a November 2009 Chief Learning Officer article, said, “Top management, we call it the jury, is not going to approve budget unless you can prove that when people go back to the job they’re using what they learn, and that’s going to accomplish the results they look for.”
Notice the phrase “when people go back to the job.” Learning is a continuous, connected and collaborative process. It happens on the job, in the job, outside of the job and when not on the job. So why on earth do we continue to evaluate our learners as if the only way competence exchange occurs is within the four physical walls of a classroom?
Charles Jennings, former CLO of Thomson Reuters and current partner with Internet Time Alliance and Duntroon Associates, further suggested, “The Kirkpatrick model is based on the assumption learning occurs through events. We know that learning is a continual process and that formal events only contribute a small percentage to the whole. Kirkpatrick and others have driven learning professionals down a blind alley trying to perfect the largely irrelevant.”
Learning is, and forever will be, part formal, part informal and part social. Each is equally important, and thus the evaluation model must now incorporate all three legs of the learning stool.
With the employee’s increased productivity and competence in mind, rather than starting with Level 1 of Kirkpatrick’s Model – reaction – start first with an end goal: achieving overall return on performance and engagement (RPE). How you set up the target is immaterial; if you build the model to incorporate formal, informal and social learning metrics for a given interval – such as a fiscal half-year – it solidifies the notion that learning is continuous, collaborative and connected, tied to both engagement and performance. This can easily eliminate the myth that classroom learning events should be the sole source of evaluation.
Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism.
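To make the idea concrete, here is a minimal sketch of how the qualitative and quantitative metrics above might be blended into a single RPE score for an interval. Everything in it is an assumption for illustration only: the article prescribes no formula, so the metric names, the 0–1 normalization, and the weights on the three modalities are all invented placeholders an organization would tune for itself.

```python
# Hypothetical sketch: blending formal, informal and social learning
# metrics into one RPE (return on performance and engagement) score.
# The weights, metric names and 0-1 normalization are illustrative
# assumptions, not a formula from the article.

def normalize(value, max_value):
    """Scale a raw metric to the 0-1 range, capped at 1.0."""
    return min(value / max_value, 1.0) if max_value else 0.0

def rpe_score(formal, informal, social, weights=(0.3, 0.3, 0.4)):
    """Weighted blend of the three learning modalities for one interval."""
    wf, wi, ws = weights
    return wf * formal + wi * informal + ws * social

# Example: one employee's metrics for a fiscal half-year (invented numbers).
social = normalize(42, 50)    # e.g. contributions to community sites
informal = normalize(8, 10)   # e.g. 360-degree feedback rating
formal = normalize(3, 4)      # e.g. formal courses completed

print(round(rpe_score(formal, informal, social), 2))  # prints 0.8
```

The point of the sketch is only that all three legs of the learning stool feed one continuous measure, rather than a classroom event feeding four sequential levels.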
RPE, therefore, is an amalgamation of formal, informal and social learning evaluation, whether intentional or unintentional. The combination of these modalities improves network connections, competence and behaviour, which correlates to improved engagement and performance. This is what learning professionals should be evaluating.
Diverging from the cockroach, it’s time for the learning profession to evolve.