New research suggests link between proctoring online exams and reduced test fraud

Most evidence that proctoring reduces cheating depends on students responding honestly to surveys. However, a recently published paper found that average scores dropped once exams were proctored, suggesting that the measure deterred or prevented cheating. It also found that in proctored exams, a student’s performance was more likely to be in keeping with their general academic record.

Questionmark founder John Kleeman explores this thought-provoking new evidence.

The 2020 paper by Dendir and Maxwell, “Cheating in online courses: Evidence from online proctoring,” presents evidence that proctoring exams reduces student dishonesty.

This paper is interesting. It’s useful to have hard evidence that cheating takes place and that security measures reduce test fraud. Most studies on the prevalence of cheating rely on test-taker surveys. Although these do give useful data, cheating in tests is a sensitive issue, and survey respondents may not be entirely truthful about their own or others’ behavior.

Dendir and Maxwell studied the results of exams in online courses in economics and geography offered at Rutgers University in the United States, covering a total of about 650 students. Each course included three exams, delivered online, which counted for substantial course credit. The questions were mostly multiple-choice (Questionmark software was not used). The courses were materially unchanged during the study period (2015 to 2019), except that in 2018 online proctoring was introduced. The online proctoring used a record-and-review model: students were recorded on video while taking tests, and anomalies were reviewed by instructors.

For those interested in the topic, the full paper is well worth a read. My reading is that the main finding was a significant drop in average scores following the introduction of online proctoring. A summary of the effect is shown in the chart below.

Test difficulty, course quality, student demographics, and prior attainment levels were largely unchanged. The authors suggested that the most plausible explanation for the drop in average scores was that cheating had occurred before proctoring was introduced, and that proctoring reduced it.
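To make this kind of comparison concrete, here is a minimal sketch in Python of how one might test whether a drop in average scores between regimes is statistically significant. The numbers are made up for illustration; this is not the authors’ data or code.

```python
# Illustrative sketch (simulated data, not the paper's): testing whether
# average exam scores differ between unproctored and proctored regimes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical percentage scores for the two regimes.
unproctored = rng.normal(loc=78, scale=10, size=400)  # pre-proctoring cohorts
proctored = rng.normal(loc=72, scale=10, size=250)    # post-proctoring cohorts

# Welch's t-test: does mean performance differ between the two regimes?
t_stat, p_value = stats.ttest_ind(unproctored, proctored, equal_var=False)
print(f"mean drop: {unproctored.mean() - proctored.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The actual study used regression analysis to control for student and course characteristics; a simple difference in means like this is only the starting point.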

To add weight to this hypothesis, the researchers conducted further analysis. They compared how well exam scores correlated with student Grade Point Average (GPA), which reflects how students performed in their other courses. Some students might do particularly well or badly on one course compared to others, but in general, if both validly measure ability, there should be a correlation between GPA and performance on the exams. This correlation would be weaker if there was widespread test fraud.

They found that the relationship with GPA was stronger with the proctored exams than with the unproctored exams. To quote the authors:

“GPA is a highly significant predictor of performance in both regimes. In unproctored exams, a unit increase in GPA raises exam score by half a letter grade (5 percentage points) on average. In proctored exams, however, GPA has a bigger impact – close to 8 percentage points, on average. This signifies that there is indeed a stronger relationship between ability and performance when proctoring is in place.”

This adds strong weight to the hypothesis that although the unproctored tests did have value, they also suffered a higher level of test fraud, and that the introduction of online proctoring reduced it.
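For readers who want to see the shape of this analysis, here is a minimal sketch in Python (using simulated data with made-up coefficients, not the authors’ dataset or code) of the kind of regression that would detect a stronger GPA slope under proctoring:

```python
# Illustrative sketch: OLS regression of exam score on GPA with a
# GPA x proctoring interaction term, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 650
gpa = rng.uniform(2.0, 4.0, size=n)
proctored = rng.integers(0, 2, size=n)  # 0 = unproctored, 1 = proctored

# Simulate the pattern the paper reports: roughly a 5-point score gain per
# GPA unit when unproctored, rising to about 8 points when proctored.
score = 55 + (5 + 3 * proctored) * gpa + rng.normal(0, 8, size=n)

df = pd.DataFrame({"score": score, "gpa": gpa, "proctored": proctored})

# The gpa:proctored coefficient estimates how much steeper the GPA slope
# is under proctoring.
model = smf.ols("score ~ gpa * proctored", data=df).fit()
print(model.params)
print(model.pvalues)
```

A positive, significant coefficient on the interaction term means GPA predicts exam performance more strongly when proctoring is in place, which is the pattern the authors describe.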

If people cheat at tests, there are significant negative consequences. The test-taker is not motivated to learn and may not learn. The validity of the qualification is devalued for all test-takers, and the credibility of the test sponsor or institution is reduced. While this might not apply in economics or geography, if test-takers cheat in other areas, failing to learn could also create safety or compliance risks.

Proctoring is just one of a range of measures that can be used to reduce test fraud, but in this case it seems to have improved the quality of test results. And it’s likely that this effect applies to other online courses as well.

John is the Founder of Questionmark. He wrote the first version of the Questionmark assessment software system and then founded Questionmark in 1988 to market, develop and support it. John has been heavily involved in assessment software development for over 30 years and has also participated in several standards initiatives.