A new study found that talent development practitioners believe performance-based assessments are more effective at measuring employee learning outcomes than knowledge tests.
Most organizations use some type of assessment, whether performance or knowledge-based, to gauge how much employees are learning from training programs. Ninety percent of 318 talent development professionals surveyed by the Association for Talent Development (ATD) said they include some kind of measure in their training courses.
“Both types of assessments have their place, but it is important to make sure you are using them for the right reasons,” said Rocki Basel, director of research at ATD.
Many organizations rely on both types of assessments for foundational skills and compliance training, she said.
About 74% of organizations use performance assessments, which measure how well employees perform a task after completing a training course. “The most common types of performance assessments are on-the-job observation and scenario-based assessments,” Basel said.
About three-fourths (76%) of the users of performance-based assessments said this was an effective way to measure learning outcomes.
Knowledge tests are also common, with 88% of organizations saying they use them. “They are helpful for checking if a worker can recall certain information or is able to interpret something they were presented,” Basel said. “But only 58% of talent development professionals think they effectively measure learning. There’s a mismatch there.”
One of the main reasons for the disconnect, according to Ken Phillips, a learning expert and founder and CEO of Phillips Associates, a consulting firm in Grayslake, Ill., is that “writing valid, scientifically sound test questions is an art and a science.”
Phillips said that test-question design is not typically what HR has experience with, so business leaders are not sure if the data collected is as valid as performance-assessment data.
In addition, “all the test questions I have seen developed by internal corporate L&D [learning and development] professionals are mainly used for recall of information—interesting results maybe—but not that valuable,” he said.
“We should be selecting the tools that are most appropriate to evaluate what we are doing with our training,” said Alaina Szlachta, founder and lead learning architect of By Design Development Solutions in Austin, Texas.
“So, if the purpose of the training is to simply assure that people demonstrate certain knowledge, then we should give them tests to demonstrate knowledge retention,” she said. “But if the training is meant to grow people’s performance or change their behavior, then we should use performance assessments. I’m very happy to see that on-the-job observation is most used for this because it is a phenomenal tool to see if people do what we expect them to do.”
Using knowledge tests to evaluate performance change will not yield effective results, experts agreed. And if you don’t have the time to construct valid test questions, “go and buy one from a company that you trust, which has invested in validity and reliability testing,” Szlachta said.
Phillips added that “if you don’t validate your test items, then you are not sure that the data you’re collecting is useful, and in fact, it could be misleading.”
Especially if you are making decisions around the data you’re collecting, it’s imperative to make sure you are creating valid and reliable test questions, he said.
Pretests or Post-Tests?
Pretesting, testing during training, and post-testing are all valuable ways to measure knowledge and learning levels and assess whether learners achieved the desired objectives.
According to the ATD, just over 50% of employers are using pretraining tests to provide baseline data to see if employees learned something specifically from the training, 75% are using tests during the training to help trainers know if they can move forward or must review something to reinforce training, and 80% are using post-training tests to see if workers achieved the desired learning objectives.
One-third of employers (33%) said they use all three tests, and 11% said they use only post-training tests. Organizations typically rely on single-select multiple-choice and true/false questions.
Szlachta said she was disappointed to hear that some employers are conducting post-tests without first doing pretests. “If we have post-data but no pre-data to compare it to, we don’t have any valuable results,” she said. “If we’re tracking change, we’ve got to have the pre-data to see if training helped move the needle in some way.”
She recommends six evaluations in total: one pretest, two tests given during the training, and three tests given after training.
“I like to conduct multiple post-tests—one right after the training, followed by tests given days and weeks later, to capture learning growth,” she said.
Phillips said that post-tests can be effectively used as reinforcement after the training.
“HR should collaborate with managers and try to get their support to encourage people to take the post-tests,” he said. “Let people know you are not singling them out for not doing well, but instead looking at aggregated data.”