In my last post I talked about confidence intervals and how they can be used effectively in assessment reporting contexts. Reporting design and development has always been interesting to me. It started when I worked for the high-stakes provincial testing program in my home province of Alberta, Canada.
When I did my graduate degree with Dr. Bruno Zumbo, he introduced me to a new world of exciting data visualization approaches, including the pioneering functional data analysis work of Professor Jim Ramsay. Professor Ramsay developed a fantastic free program called TESTGRAF that performs non-parametric item response modeling and differential item functioning analysis. I have used TESTGRAF many times over my career to analyze assessment data.
The work of both these experts has guided me through all my work in report design. In working on exciting new reports to meet the needs of Questionmark customers, I’m mindful of what I have learned from them and from others who have influenced me over the years. In this season of giving, I’d like to share some ideas that might be helpful to you and your organization.
I greatly admire the work of Edward Tufte, whose books provide great food for thought on data analysis and visualization in numerous contexts. My favourite of these is The Visual Display of Quantitative Information, which offers creative ways to display many variables together in succinct ways. I have spent many a Canadian winter night curled up with that book, so I know it is a great gift idea for that someone special this holiday season!
The Standards for Educational and Psychological Testing contains a section highlighting the commitments we have as assessment professionals in terms of appropriate, fair, and valid reporting of information to multiple levels of stakeholders, including the most important stakeholder: the test taker! In the section on “Test Administration, Scoring, and Reporting” you will find a number of important standards around reporting that are worth checking out.
A colleague of mine, Stefanie Moerbeek at EXIN Exams, introduced me to a number of great papers written by Dr. Gavin Brown and Dr. John Hattie around the validity of score reports. Dr. Hattie did a session at NCME in 2009 entitled Visibly Learning from Reports: The Validity of Score Reports, in which he listed some recommended principles of reporting to maximize the valid interpretations of reports:
1. Readers of Reports need a guarantee of safe passage
2. Readers of Reports need a guarantee of destination recovery
3. Maximize interpretations and minimize the use of numbers
4. The answer is never more than seven plus or minus two
5. Each report needs to have a major theme
6. Anchor the tool in the task domain
7. A Report should minimize scrolling, be uncluttered, and maximize the “seen” over the “read”
8. A Report should be designed to address specific questions
9. A Report should provide justification of the test for the specific applied purpose and for the utility of the test in the applied setting
10. A Report should be timely to the decisions being made (formative, diagnostic, summative and ascriptive)
11. Those receiving Reports need information about the meaning and constraints of any report
12. Reports need to be conceived as actions, not as screens to print.
You can read a paper Hattie wrote on this subject in the Online Educational Research Journal. Questionmark’s white paper on Assessments through the Learning Process also offers helpful general information about reporting on assessment results.