Posted by Austin Fossey
We are back after another successful Questionmark Users Conference! This was the fifth of these conferences I have been to, and I think it was one of my favorites. Sure, you can’t go wrong with putting a bunch of like-minded confreres in the heart of wine country, but what I liked most were the stories I heard from our customers about their assessment programs.
Regardless of their individual roles or their organizations’ goals, Questionmark users are first and foremost measurement professionals. I was impressed by our customers’ commitment to constantly looking for ways to improve the quality, validity, and impact of their measurements for stakeholders.
Questionmark is, after all, just a tool, and like any tool, the quality of the work it produces depends on the skill of the craftsperson. It was encouraging to hear how our customers were using Questionmark along with other tools and research to iteratively improve their work.
For example, one client, knowing that the Cronbach’s Alpha reported by Questionmark is a theoretical lower bound of the assessment’s true reliability, shared that they were comparing other reliability coefficients appropriate for their homogeneous set of assessment scores. Using an alternative coefficient suited to their data scenario, they were able to defend the assertion that their assessments were probably more reliable than Cronbach’s Alpha indicated.
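As background for that example: Cronbach’s Alpha (Guttman’s lambda-3) is a lower bound on true reliability, and some other coefficients, such as Guttman’s lambda-2, are provably at least as large, so they can tighten the bound. The source doesn’t say which coefficient this client used; the sketch below simply illustrates the general idea with an invented item-score matrix:

```python
import numpy as np

def cronbachs_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def guttman_lambda2(scores):
    """Guttman's lambda-2: a reliability lower bound that is
    always at least as large as alpha."""
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False)        # ddof=1 covariances
    item_vars = np.diag(cov)
    total_var = cov.sum()                     # variance of total score
    # sum of squared off-diagonal covariances
    off_diag_sq = (cov ** 2).sum() - (item_vars ** 2).sum()
    return (total_var - item_vars.sum()
            + np.sqrt(k / (k - 1) * off_diag_sq)) / total_var

# invented item-score matrix: 6 examinees x 4 dichotomous items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
], dtype=float)

alpha = cronbachs_alpha(scores)
lam2 = guttman_lambda2(scores)
```

For this toy data, lambda-2 comes out slightly higher than alpha, which is the kind of result that lets a program argue its assessments are at least that reliable.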
Another client, having set a valid cut score using a modified Angoff method, found that their program now had a face validity problem with its performance standard: despite the evidence that the cut score was set in a fair and valid fashion, their stakeholders maintained that the standard was too high. Rather than rebuilding the entire assessment, the client explored strategies for building a stronger validity argument to support the cut score, such as a replication study using the modified Angoff method or another standard-setting method with an independent group of subject matter experts.
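For readers unfamiliar with the method, an Angoff cut score is typically derived by having each subject matter expert estimate, for every item, the probability that a minimally competent candidate would answer it correctly; a replication study repeats the exercise with an independent panel. A minimal sketch, with ratings invented purely for illustration:

```python
def angoff_cut_score(ratings):
    """Compute a cut score from Angoff ratings.

    ratings: one list per judge, holding that judge's estimated
    probability that a minimally competent candidate answers each
    item correctly. A judge's expected total score is the sum of
    their ratings; the panel cut score is the mean of those totals.
    """
    per_judge = [sum(judge) for judge in ratings]
    return sum(per_judge) / len(per_judge)

# hypothetical ratings from the original panel (3 judges x 5 items)
panel_a = [
    [0.7, 0.6, 0.8, 0.5, 0.9],
    [0.6, 0.6, 0.7, 0.6, 0.8],
    [0.8, 0.5, 0.9, 0.4, 0.9],
]

# hypothetical ratings from an independent replication panel
panel_b = [
    [0.7, 0.5, 0.8, 0.6, 0.9],
    [0.6, 0.7, 0.8, 0.5, 0.8],
]

cut_a = angoff_cut_score(panel_a)  # original panel's cut score
cut_b = angoff_cut_score(panel_b)  # replication panel's cut score
```

If the two independent panels converge on similar cut scores, that agreement becomes one more piece of evidence supporting the performance standard against the stakeholders’ objection.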
These were just a few of the examples of Questionmark users working to build the highest quality assessments that they can. These customers were thinking critically about their assessments, working iteratively with their test developers, exploring their data, and incorporating feedback from their stakeholders. These are hallmarks of good assessment, and I am excited to know that these measurement professionals are using Questionmark as one of their tools.