One of my favourite roles is as a Tutor for CIPD professional programmes. I specialise in Learning and Development, and the other day I was running a workshop for 16 HR professionals on Evaluating Training Interventions.
Early in the day I asked people to raise their hands if they evaluated their training programmes. Sixteen hands went up. Next I asked how many do more than get participants to complete a feedback form or ‘happy sheet’ at the end of the session. Only three hands remained in the air.
My guess is that this is representative of evaluation practice in organisations throughout the UK. They are keen to find out whether participants enjoyed the session, whether it met their learning objectives, whether the trainer was any good and whether the room was up to scratch. Yet little is done to assess whether the programme had any longer-term, sustainable impact on behaviour or profits.
Now, there may be many good reasons for this, including time, budget and the challenges of linking changes of behaviour or increases in profit directly to a training intervention.
So whilst it’s often beneficial, if sometimes tricky, for organisations to evaluate the medium- to long-term results of their development programmes, I wonder how useful it is to place so much emphasis on short-term impact through the completion and analysis of evaluation forms. Do we need these forms at all? And if we do, we might want to think more carefully about how we use them.
Most forms are heavily weighted towards the performance of the trainer, and the trainer’s future work often depends on the results. So we already have a biased system, because we tend to get what we evaluate. And remember, the trainer is running the show.
Here’s how it works with an evaluation form that I saw recently:
‘Please rate the following on a scale of 1-5 – the trainer:’
‘Made the module aims and outcomes clear’. Trainer response: make sure to state the learning outcomes at the beginning of the session, and repeat occasionally throughout the session – even if not necessary.
‘Appeared well prepared’. Trainer response: look prepared and blame any errors in slides/handouts or glitches in organisation on an ‘admin office’ that they will contact to sort out all errors.
‘Structured the session appropriately’. Trainer response: I wonder what ‘appropriately’ means.
The trainer is working hard to get good scores, and if by chance this also leads to a change in behaviour or an increase in productivity for the participants – well, that’s great too.
Surely we’ve got our priorities wrong here. How about we shift our focus away from measuring the ‘performance’ of the trainer, and onto measuring the performance of the participants – after all, isn’t that what training programmes are for?