Posted by Joan Phaup
Last week I mentioned a peer discussion at the upcoming Questionmark 2012 Users Conference about the relative merits of proctored and non-proctored assessments.
Today I’d like to share a conversation I had recently with Frank Loforte, who will be one of our case study presenters in New Orleans March 20 – 23. Frank works for Beckman Coulter, Inc., where he trains and certifies the company’s technicians and customers in the use of its biomedical testing products. During his presentation, he’ll be describing how his team is using the Net Promoter® Score in training evaluation surveys.
Tell me about your work at Beckman Coulter.
I do a little bit of everything, but my main job is principal technical trainer. I have been teaching service engineers how to use the instruments our company sells. Along with that I maintain the computers and the websites as well as running tests and surveys. That’s where Questionmark comes in: we use it primarily for our (Kirkpatrick Model) Level 1 and Level 2 assessments.
How are you using Questionmark?
We use it to monitor how the customers and service engineers who come through our training center respond to our courses. We track those surveys and evaluations to make sure they are pleased with the training. We also gauge our students’ levels of knowledge, application and analysis skills. We give a knowledge test, an application test and an analysis test, which we grade separately. Then we report those scores to their managers in the field. We also go back three months after a course to ask managers how well their students are performing, in order to get the supervisor’s point of view as well as the student’s point of view.
You’re going to be talking about the Net Promoter Score at the Users Conference. Can you explain a little about that?
We have been using Net Promoter Score questions for about a year and a half. This helps us track and quantify how many students are pleased with the training. Many companies use this kind of question for collecting people’s opinions about products. So when they sell a widget, they ask customers whether they would recommend the widget to someone else on a zero-to-10 scale. People who respond with a score of 9 or 10 are considered promoters. The 7s and 8s are called “passives,” and those who rate something from 0 to 6 are called “detractors.”
The scoring is kind of complicated (and I’ll be explaining it during my session) but it gives you a really clear indication of how you are doing. Even more valuable is the follow-up question: “What is the most important reason for that score?” We look closely at the responses we get to that question from the detractors and respond to them right away.
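For readers who want to try the calculation themselves, here is a minimal sketch of the standard Net Promoter Score formula described above (percentage of promoters minus percentage of detractors, on a scale from −100 to +100). The sample responses are purely illustrative, not real survey data:

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but neither add to nor subtract from the score.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of course-evaluation responses:
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(net_promoter_score(responses))  # 5 promoters, 2 detractors -> 30.0
```

A class with 50% promoters and 20% detractors, as in this example, yields an NPS of +30; a negative score means detractors outnumber promoters.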
What do you expect people to learn from your session?
I’d like them to see what the Net Promoter Score is, how we apply it and how they can use it in their own environments. We have a track record with it now, so I have a lot of data and it’s pretty consistent. We seem to get the same type of response from a varied public, so it’s a good indicator. It’s what you do with the answer to the second question that’s really important: fixing the things people say are broken and continuing to do the things they like.
What are you looking forward to at the conference?
I want to learn about other people and what they do. That’s what’s really great: seeing people from all different companies using the same product and asking them questions. I also want to learn more about Questionmark Analytics and any new things that are coming along.
If you would like to learn more about the conference and register online, click here.