Posted by John Kleeman, Founder

What can you do to make tests fairer for people whose first language is not the language of the test?

Last week Gavin Cooney, CEO of Learnosity/Questionmark, and I gave the closing keynote at the Beyond Multiple Choice conference. Our title was “Closing the Assessment Excellence Gap”. We spoke about how assessment is critical for digital learning, and about the importance of inclusivity and accessibility.

If you want to watch a recording of our talk, you can find it HERE (you can skip directly to our session at 3:55:35). One of the issues we touched on was inclusivity challenges for people who take a test that is not in their native language, and I thought this would be a good topic to raise here.

What proportion of people are not native speakers?

According to the OECD (Organisation for Economic Co-operation and Development) in its “Skills on the Move” report, around 12% of adults, averaged across a mix of developed countries, are non-native speakers of the language of the country they live in.

In the US, analysis of 2018 census data by the Center for Immigration Studies suggests that over 67 million US residents speak a language other than English at home, a striking 21.9% of US residents. Although Spanish predominates, Arabic, Chinese, French, Korean, Tagalog and Vietnamese are each spoken at home by more than a million US residents.
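As a quick back-of-the-envelope check on those figures (my own arithmetic, not from the CIS report, and assuming the usual census convention that the language question covers residents aged five and over):

```python
# Back-of-the-envelope check of the quoted US figures.
# Assumption: the 21.9% share is of residents aged 5+, the population
# the census language question covers.
speakers = 67_000_000   # residents speaking a non-English language at home
share = 0.219           # 21.9% of residents

implied_base = speakers / share
print(f"Implied base population: {implied_base / 1e6:.0f} million")
# -> ~306 million, consistent with the 2018 US population aged 5 and over
```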

The reality is that with increasing migration and globalization, most countries of the world will have a material minority of test-takers who do not speak the country’s main language as their native language.

How does taking a test in a second language affect your score?

Clearly, someone taking a test in a second language risks scoring lower for reasons that may not be directly relevant to the test construct.

The OECD (see the report above) finds that people whose native language differs from that of the country they live in score significantly less well, on average, in literacy, numeracy and problem-solving tests. This aligns with poorer labor market outcomes for such individuals, i.e. they tend to get lower-paying jobs.

However, a significant part of the difference comes from having to take the test in a language that isn’t their own. The size of this effect varies with the languages involved: for example, it is easier for a Spanish speaker who moves to Italy to take a test in Italian than for a Spanish speaker who moves to Finland to take a test in Finnish, because Spanish and Italian are closely related languages.

But on average in literacy tests, the OECD reports that about half of the difference in test scores relates to the language of the test: if you give foreign-born people tests in their native language, the score difference is much smaller.
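To make that concrete, here is a small worked example with hypothetical numbers (my own illustration, not figures from the OECD report):

```python
# Hypothetical literacy scores illustrating "about half the score gap
# relates to the language of the test". All numbers are invented.
native_avg = 270              # native speakers, tested in their own language
foreign_test_lang = 230       # foreign-born, tested in the country's language

gap = native_avg - foreign_test_lang      # 40-point gap
language_effect = 0.5 * gap               # ~half attributable to test language

foreign_native_lang = foreign_test_lang + language_effect
print(f"Expected score if tested in own language: {foreign_native_lang:.0f}")
# -> 250: the remaining ~20-point gap reflects factors other than test language
```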

This feels like a significant inclusivity issue within assessment.

Whether your tests are in certification, corporate compliance, learning or education, there is a real question of how to make them fair for test-takers whose native language differs from the test language, particularly when language fluency is not relevant to the skill being measured.

This issue is only going to get more serious as globalization increases.

What can we do to be inclusive to people whose native language is not that of the test?

One approach is to provide the test in more than one language. It is more work to create and maintain a test in multiple languages, but this may be the fairest approach, and many Questionmark users translate Questionmark assessments for precisely this reason. You can either translate within the software, or export to an external format so that professional translators can work on the text.
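To illustrate the second route, here is a minimal Python sketch that writes item text into an XLIFF 1.2 file, a widely used localization exchange format that professional translators’ tools can read. To be clear, this is not Questionmark’s actual export format: the item IDs and text are invented for the example.

```python
# Minimal sketch: export a question's text as XLIFF 1.2 for translation.
# Illustrative only -- not Questionmark's actual export format.
import xml.etree.ElementTree as ET

xliff = ET.Element("xliff", {
    "version": "1.2",
    "xmlns": "urn:oasis:names:tc:xliff:document:1.2",
})
file_el = ET.SubElement(xliff, "file", {
    "original": "quiz-items",
    "source-language": "en",   # language the test was authored in
    "target-language": "es",   # language the translator will supply
    "datatype": "plaintext",
})
body = ET.SubElement(file_el, "body")

unit = ET.SubElement(body, "trans-unit", {"id": "q1-stem"})  # hypothetical item ID
ET.SubElement(unit, "source").text = "Which of the following is a mammal?"
ET.SubElement(unit, "target").text = ""  # left empty for the translator to fill in

ET.ElementTree(xliff).write("items.xlf", encoding="utf-8", xml_declaration=True)
```

The translator returns the file with the target elements filled in, and the translated text can then be imported back into the assessment.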

What can we do to improve the experience if we cannot translate the test?

Sometimes it is not economical to translate the test into another language, and we need to provide a test in one language that is still accessible to non-native speakers. What approaches are sensible?

A common approach is to give non-native speakers extra time. Although it is hard to make this psychometrically rigorous, it feels like a fair accommodation.

Another approach is to take great care with the language of the assessment itself, to make it more accessible for non-native speakers. Good practice here also tends to improve the test for all test-takers. The UK exams regulator, Ofqual, published draft guidelines earlier this month on making tests more accessible and inclusive. The guidelines are well worth reading; some key points are below, and after the list is a sketch of how a few of them could be checked automatically:

  1. Use simple language, unless the construct being assessed requires more complex language.
  2. Give clear and unambiguous instructions on how to complete the assessment; people can easily get confused about what they are supposed to do.
  3. Use common and straightforward words (e.g. “with” rather than “in conjunction with”).
  4. Avoid colloquialisms, idioms, metaphors and sarcasm.
  5. If you use an abbreviation, give the expanded form the first time it is used.
  6. Be consistent in your use of language: don’t use one term in one place and a different term in another (e.g. “text” in one place and “source” in another).
  7. Unless needed, avoid negative language (e.g. use of “not”).
  8. Avoid long sentences with complex grammar; divide them into two or more simpler sentences.
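A few of these checks lend themselves to automation. Here is a rough sketch (my own illustration, not part of the Ofqual guidance) of a simple review pass that flags long sentences, negative wording and unexpanded abbreviations in an item’s text:

```python
# Rough sketch of an automated item-review pass for a few of the
# guidelines above. Thresholds and word lists are illustrative.
import re

NEGATIONS = {"not", "never", "except", "unless"}
MAX_SENTENCE_WORDS = 25  # arbitrary threshold for a "long" sentence

def review_item(text: str) -> list[str]:
    """Flag possible accessibility issues in a question's wording."""
    warnings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for sentence in sentences:
        words = sentence.split()
        if len(words) > MAX_SENTENCE_WORDS:
            warnings.append(f"Long sentence ({len(words)} words): consider splitting it")
        found = NEGATIONS.intersection(w.lower().strip(".,?!") for w in words)
        if found:
            warnings.append(f"Negative wording ({', '.join(sorted(found))}): rephrase if possible")
    # All-caps tokens that never appear with a bracketed expansion nearby
    for abbr in set(re.findall(r"\b[A-Z]{2,}\b", text)):
        if abbr.lower() not in NEGATIONS and f"({abbr})" not in text:
            warnings.append(f"Abbreviation '{abbr}' is never expanded")
    return warnings

for warning in review_item("Select the option that is not an instance of the OECD framework."):
    print(warning)
# -> flags the negative wording ("not") and the unexpanded abbreviation "OECD"
```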

As organizations become more global and more multicultural, there are no easy answers to making tests suitable for everyone, whatever their first language.

But I hope this article helps raise awareness of the issue. If you’d like to watch a recording of Gavin’s and my keynote at Beyond Multiple Choice, you can access it HERE (you can skip directly to our session at 3:55:35).

John is the Founder of Questionmark. He wrote the first version of the Questionmark assessment software system and then founded Questionmark in 1988 to market, develop and support it. John has been heavily involved in assessment software development for over 30 years and has also participated in several standards initiatives.