
Time’s Up – Learning Will Forever Be Part Formal, Part Informal and Part Social

Originally posted to Chief Learning Officer Magazine, February 2011 Edition. Reprinted here with permission. (original link here)

By Dan Pontefract

The magnetic vortex known as the Four Levels of Evaluation, utilized pervasively and at times blindly within the learning industry, has run its course and is in need of an update, if not a replacement.

Since 1959, over 50 years ago, learning professionals en masse and across the planet have employed the Kirkpatrick Model as the sole basis of evaluation.

There have been challengers and pretenders, but the juggernaut framework known for the stages of Reaction, Learning, Behaviour and Results has remained steadfast through the years.

The problem? Like the cockroach at 350 million years old, the Four Levels of Evaluation model has stood the test of time without the obligation to evolve. Our burgeoning 2.0 society, however, has grown faster than Moore’s Law, or anyone else, would ever have predicted.

Karie Willyerd, former CLO of Sun Microsystems and current CEO of Jambok, puts things into perspective ever so succinctly.

“Think about how much the world has changed in the last 50 years,” she said.  “We didn’t have color television in most homes; the Internet was years away from common use; and most companies were just building out their first corporate training functions.  That’s when Kirkpatrick’s levels first came out, and they’ve survived longer than almost any other business in existence.  We now have incredible statistical tools at our disposal and yet we haven’t used them on a widespread basis to measure learning effects or investment plans.”

With society wrapped in a cultural transformation expedited by technology, we continue to use an evaluation model that, quite simply, was built on the premise that learning occurs solely in a classroom.

Donald Kirkpatrick himself, interviewed in a November 2009 Chief Learning Officer article, said, “Top management, we call it the jury, is not going to approve budget unless you can prove that when people go back to the job they’re using what they learn, and that’s going to accomplish the results they look for.”

Notice the phrase “when people go back to the job”. Learning is a continuous, connected and collaborative process. It happens on the job, outside the job and away from the job entirely, so why on earth do we continue to evaluate our learners as if the only way competence exchange occurs is within the four physical walls of a classroom?

Charles Jennings, former CLO of Thomson Reuters and current partner with Internet Time Alliance and Duntroon Associates, further suggested, “The Kirkpatrick model is based on the assumption learning occurs through events. We know that learning is a continual process and that formal events only contribute a small percentage to the whole. Kirkpatrick and others have driven learning professionals down a blind alley trying to perfect the largely irrelevant.”

Learning is, and forever will be, part formal, part informal and part social. Each is equally important, and thus the evaluation model must now incorporate all three legs of the learning stool.

With the employee’s increased productivity and competence in mind, rather than starting with Level 1 of Kirkpatrick’s Model – reaction – start first with an end goal to achieve overall return on performance and engagement (RPE). How you set up the target is immaterial; if you build the model to incorporate formal, informal and social learning metrics for a given interval – such as a fiscal half-year – it solidifies the notion that learning is continuous, collaborative and connected, tied to both engagement and performance. This can easily eliminate the myth that classroom learning events should be the sole source of evaluation.

Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time spent on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model, including a perpetual, open 360-degree feedback mechanism.
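As a rough illustration of how the three legs might roll up into a single interval score, here is a minimal Python sketch. Every metric name, target and weight below is an invented assumption for illustration only; the article does not prescribe a formula, and a real RPE model would be negotiated with business partners.

```python
# Hypothetical RPE roll-up: each leg (formal, informal, social) holds raw
# metrics measured against per-metric targets for the interval; the three
# leg scores are then blended with weights. All figures are invented.

def normalize(value, target):
    """Scale a raw metric against its target, capped at 1.0."""
    return min(value / target, 1.0) if target else 0.0

def rpe_score(metrics, targets, weights):
    """Weighted average of per-leg averages of normalized metrics."""
    total = 0.0
    for leg, weight in weights.items():
        leg_score = sum(
            normalize(metrics[leg][name], targets[leg][name])
            for name in metrics[leg]
        ) / len(metrics[leg])
        total += weight * leg_score
    return total

# Example fiscal half-year figures (invented for illustration).
metrics = {
    "formal":   {"courses_completed": 3, "assessment_avg": 0.82},
    "informal": {"feedback_items": 9},
    "social":   {"contributions": 40, "network_depth": 12},
}
targets = {
    "formal":   {"courses_completed": 4, "assessment_avg": 1.0},
    "informal": {"feedback_items": 10},
    "social":   {"contributions": 50, "network_depth": 15},
}
weights = {"formal": 1 / 3, "informal": 1 / 3, "social": 1 / 3}

print(round(rpe_score(metrics, targets, weights), 2))  # → 0.83
```

Equal weighting keeps with the article’s claim that each leg of the learning stool is equally important; an organization could of course tune the weights per interval.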

RPE, therefore, is an amalgamation of formal, informal and social learning evaluation, whether intentional or unintentional. The combination of these modalities improves network connections, competence and behaviour, which correlates with improved engagement and performance. This is what learning professionals should be evaluating.

Diverging from the cockroach, it’s time for the learning profession to evolve.

13 Comments

  • P Seibel / 7 February 2011 1:23

    This is something that I totally agree with, but my question is how do we create a new evaluation model with quantifiable metrics that execs can digest? I would imagine that evaluating the formal learning would still look a lot like the Kirkpatrick model, while the social aspect could be evaluated based on quantity of content in relation to how often said content is actually used in the workplace. Informal learning might be measurable in terms of the 360 degree feedback you mentioned, with specific emphasis on workers achieving specific objectives. What I’m not sure of, though, is how to measure all three elements together, rather than as three separate pieces.

  • Dan Pontefract / 7 February 2011 3:40

    @pseibel – stay tuned …

  • Judy / 7 February 2011 11:09

    Thank you for this post! I couldn’t agree more. I think L&D professionals promote and use the Kirkpatrick model because it is easier to measure “event” learning. From my experience, it is mostly level one data that is collected, as time always seems to be an issue.

    I think measurement is the single biggest issue in L&D. What do you measure? Why are you measuring it? How will you use the data you have? To this end, doesn’t measurement need to take into account long term behaviour change – isn’t that what we really want to impact in L&D?

    How do we measure that? It’s more than bums on seats and a happy sheet at the end of the event!

  • David Bennett / 9 February 2011 5:56

    Hi Dan,

    I’ll be very interested in your thoughts on a different model of evaluation. I’m currently working with a project to implement measurement & evaluation at an enterprise level. We don’t buy Kirkpatrick as scripture but the levels do provide quite good categories/headings to use as a base.

    We do, however, turn it on its head to get line-of-sight from business objectives, through performance and learning objectives, to satisfaction and adoption.

    I did want to ask a bit more about RPE, “return on performance and engagement”. Shouldn’t performance be the return we are interested in? If that IS the case, then RPE IS better performance, which should start some kind of positive feedback loop and instantly create a global utopia? (forgive the exaggeration, jus’ tryin to lighten up my afternoon)

    Since that is unlikely to be the case, I must be missing something. Can you enlighten me?

    Cheers

  • Donald Clark / 13 February 2011 12:28

    Dan, while I think your article in CLO was quite interesting, I think it misses on a couple of points. I posted them on my blog – The Tools of Our Craft

  • Dave Ferguson / 4 March 2011 3:39

    Claude Lineberry said something to the effect that 89% of ISPI presentations cite Tom Gilbert, but only 14.5% of the citers have read him. I think something similar holds true for Kirkpatrick. I don’t consider the four levels as much of a model; I see them as an analogy, a good-enough way to guide trainers and their clients toward bigger-picture thinking.

    I believe an awful lot of formal training is, if not divorced from the job setting, clearly operating under a separation agreement. When trainers and their clients consider the elements in the four levels (like it? learned it? use it? got results?), there’s at least the potential to reconcile the two sides.

    Alas, my experience over many years suggests a sizeable portion of in-house trainers use the four levels much as they use the nine-dot puzzle, the Maslow pyramid, and that business about remembering 20% of what you read but 40% of what you eat (or however it goes): reflexively, almost ritualistically, as part of the standard ingredients in a recipe for cognitive meatloaf.

  • David Glow / 7 March 2011 10:46

    @dan- will be including your link to this article in a tweet during my LS11 presentation to encourage folks to look beyond current eval models.

    @dave- “cognitive meatloaf”: Amen. But even worse- you know the chefs who follow a recipe generally can’t make the dish as good as the original without a LOT of experimentation.

    “how do we create a new evaluation model with quantifiable metrics that execs can digest”- erm… maybe get the execs involved and involve business partners to create a model together?

    There are a lot of metrics in business that aren’t a perfect science but are accepted to drive the business (sales and finance forecasts, etc.). Why can’t we bring the same type of thinking and cross-functional analysis to define some strategies for gathering the data execs will buy into regarding the assets that produce value in a knowledge-based economy? (Not a fan of the term, but it does recognize that today’s key business value drivers are the people, not the parts.)

  • David Grebow / 8 March 2011 10:32

    Dan,

    I agree Kirkpatrick needs to be changed to reflect the changes in the workplace. I would also add that ADDIE is part of the problem as well.

  • Dan Pontefract / 13 March 2011 11:13

    @Judy – yes, I do believe that it’s the amalgamation of all interactions, consumption points, contributions, network breadth and depth as well as formal and informal assessment pieces that will make up the new ‘evaluation’ model.

    @David.Bennett – there have been several studies that link the level of personal and organizational engagement to increases in both performance and productivity. Hence, RPE is seeking to quantify a qualitative and quantitative loop. Working on it.

    @Donald – as always, thanks for both reinforcing and furthering the thesis

    @Dave.Ferguson – are you suggesting that trainers are merely using it because it’s always been there?

    @David.Glow – thanks for the nod, much appreciated. re: getting execs involved … if you can, good on ya. Good luck with the preso.

    @David.Grebow – ADDIE, hmmmm. I personally view ADDIE more as a project management model. PMBOK and PMI are successful because they create a system for Project Managers to follow. Sometimes it’s too rigid, but for many, it fits the bill. ADDIE can play a similar role for instructional designers. I don’t think formal learning is going away, but perhaps your point is that due to the link of ADDIE to formal learning, many simply disregard informal and social learning in the evaluation model? I’m thinking that’s what you’re implying.

    Was there a sale on David’s and this article? 😉

    Thanks all for popping by and keeping the change dialogue going.

  • Jay Cross » How to evaluate social and informal learning / 21 February 2012 8:18

    […] Pontefract had a great post on TrainingWreck about the inadequacy of the Kirkpatrick model in a world where learning is […]

  • Connecting Learning, at work | openlearninghouse / 25 February 2012 3:10

    […] […]

  • Internet Time Blog : Working Smarter Top 20 Hot List / 10 February 2013 12:00

    […] Time’s Up – Learning Will Forever Be Part Formal, Part Informal and Part Social- Dan Pontefract, February 6, 2011 […]

  • Internet Time Blog : How to evaluate social and informal learning / 10 February 2013 12:08

    […] Pontefract had a great post on TrainingWreck about the inadequacy of the Kirkpatrick model in a world where learning is […]

Want to leave a comment? I'd love to hear from you. Cheers, dp.