Not evaluating training? No more excuses!

“How do you evaluate your courses?” I always ask this question when it’s time to discuss evaluation with participants in an instructional design course. Well, let’s be honest: ADDIE ends with an E for Evaluation for a reason. And that’s before we even get to evaluating the design process itself, which is often a bridge too far.

From what I usually hear, about half of the participants send out a survey after instructor-led sessions, and that’s it. Some don’t evaluate at all, while only a small number get to level two or three of Kirkpatrick’s evaluation model. E-learning is often not evaluated at all. It seems to be a common problem: according to Brandon Hall Group research (November 2022), only 27% of businesses have a framework to measure the effectiveness of their training.

Let’s look into the main reasons to evaluate training before discussing common excuses (and countering them).

Why evaluation is key

There are several reasons why evaluation should be standard practice:
1. Be able to improve
You need a continuous feedback loop to improve your learning resources and find out what works for your learners. It helps you ensure that the course content is relevant and meets the learners’ needs. For example, you’d like to find out what their key takeaways were, why they didn’t complete an activity, or why they spent so much time on a certain slide.
2. Measure the effectiveness of a course
Only by measuring the outcomes of training against the SMART learning objectives you defined at the start of the project will you know whether you’ve met your course and organisational goals.
3. Show your value
If we can show stakeholders how training is improving employee performance, it’ll prove the impact and value of our work. For example, tasks take less time, employees make fewer mistakes per hour, or the attrition rate decreases. You might even be able to determine the return on investment (ROI) or return on expectations (ROE); see the quick sketch after this list. Measuring and showing the effectiveness of training will also make it easier to get funding for future training.
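
As a quick illustration of the ROI idea: take the monetary benefit you can attribute to the training, subtract the cost of the training, and divide by that cost. The figures below are invented purely for the example.

```python
# Illustrative only: the figures below are made up for the example.
training_cost = 15_000       # design, delivery and learner time
monetary_benefit = 24_000    # e.g. value of time saved and errors avoided

# Classic ROI calculation: net benefit divided by cost, as a percentage.
roi_percent = (monetary_benefit - training_cost) / training_cost * 100
print(f"ROI: {roi_percent:.0f}%")   # ROI: 60%
```

The hard part, of course, isn’t the arithmetic but agreeing with stakeholders on which benefits you can reasonably attribute to the training.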

Common excuses

When talking with other instructional designers, I often hear three reasons for not evaluating training. The main reason is a lack of time. By the time the course is implemented, the focus has already shifted to another project. Or the client doesn’t want to pay the instructional designer for any evaluation efforts: “We can do that ourselves.” The second reason is that the business has sent out surveys in the past, but the response rate was so low that they don’t think it’s worth continuing the effort. Thirdly, the organisation doesn’t have any data to measure the impact of its courses.

Countering excuses

Let’s counter those excuses for dismissing training evaluation:
No time
You need to plan for the evaluation from the start and see it as an integral part of your instructional design process. When you’re defining the objectives, think about how you’re going to evaluate them: what data do you need and how are you going to gather it? Also allow time to look at the data you’re gathering, so you can actually report on the effectiveness and improve.
Low response
Ask yourself: “What questions did I ask and what did I do with the survey results?” Did you analyse the results and make improvements to that course or the following ones? Or did they end up at the bottom of a drawer? When learners understand the importance of filling out a survey and see that their feedback is valued, you’ll get a higher response rate. The type of questions you ask will also show learners that you’re interested in their opinion. Let’s not ask about the quality of their lunch.
No data
When you’re writing SMART objectives, you’ll also need to think about how you’re going to measure the outcomes. What data can you extract from your LMS? What other business data can you use, e.g. attrition rates, cost savings, number of complaints, Net Promoter Scores, or time to complete a task? For more detailed responses that you can’t collect from existing sources, you might need a survey.
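
To make that concrete, here’s a minimal sketch of a before-and-after comparison on a single business metric, errors per hour in this case. The figures are invented for the example; in practice they would come from your LMS or operational reports.

```python
# Illustrative only: the before/after figures are invented for the example.
errors_per_hour_before = [4.1, 3.8, 4.5, 4.0, 3.9]   # sampled before the training
errors_per_hour_after  = [2.9, 3.1, 2.7, 3.0, 2.8]   # sampled after the training

mean_before = sum(errors_per_hour_before) / len(errors_per_hour_before)
mean_after = sum(errors_per_hour_after) / len(errors_per_hour_after)
change = (mean_before - mean_after) / mean_before * 100

print(f"Average errors per hour: {mean_before:.1f} -> {mean_after:.1f} ({change:.0f}% fewer)")
```

A drop like this doesn’t prove on its own that the training caused the improvement, but it gives you something concrete to put in front of stakeholders.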

Convinced?

I hope you’ll give evaluation a go because it’ll definitely improve your work. In the next blog I’ll talk about Kirkpatrick’s evaluation model. It’s an easy and well-known evaluation model to start with. In the third blog of this evaluation series, I’ll discuss evaluating the instructional design process. In the meantime, have a look at your survey questions 😊.