I’ve just read a thought-provoking article on diagnostic tests, written by Simon Bates and Ross Galloway of the University of Edinburgh Physics Education Research Group and published by the UK Physical Sciences Centre (see pages 10-20 here).
The authors are particularly concerned with diagnostic tests that measure conceptual understanding and identify misconceptions. So rather than testing facts, knowledge, or particular skills, their interest in diagnostic assessments is primarily in whether students understand some key concepts in the physical sciences. If students don’t understand them, then they, as instructors, need to correct this in their teaching and feedback.
The article gives examples of diagnostic tests in use and also offers good, detailed guidance on how to construct them, including which statistics indicate a good test. Following other authors in the Physics Education Research literature, they recommend a difficulty index (p-value) between 0.3 and 0.9, a discrimination index of 0.3 or better (or a point biserial correlation of 0.2 or better), and a reliability index of 0.7 or better.
They also explain how to write questions that probe why people don’t understand something, as well as what they don’t understand, and they give the example below (from the Lawson Classroom Test of Scientific Thinking) as something they have used in their own teaching. It is a what-why question: it asks for a fact and also asks why that fact is the case.
Bates and Galloway report that students entering university answer the first, “what” part of the question just as well as those who have completed their first year, but that students who have been at university for a year perform significantly better on the “why” part.
Getting to the root of learner misconceptions is a key challenge for all of us in learning and assessment, and I recommend this article as a good read.