
Posted by Paula Baciu MA MBPsS, Assessment Content Manager

How do you know that your investment in selection methods/assessment is worth the money?

In this blog post, I want to challenge one of your most deep-seated beliefs: that humans are better at candidate selection than technology. We already know that 49% of employees rely on their gut to make business decisions, so it comes as no surprise that 60.4% of recruiters rely on “their own methods” for selecting new hires. These often involve unstructured interviews, resume screening, and job tasks – but just how valid are they?

How to measure the effectiveness of recruitment tools

The purpose of selection methods is to identify the best-performing candidate(s) for the role in question. You can measure the effectiveness of your selection methods by tracking the training performance and the job performance of new hires. Training performance is the speed at which a new hire learns the job and reaches an optimal productivity level. A universal measure of both training and job performance is revenue: the return on the investment (ROI) the employer made to hire the worker and pay their monthly salary.

Attain 40%+ more economic productivity by hiring the best performers

If a worker brings profits to the firm (or saves money), then the investment in recruiting and supporting them will pay off. The ROI is therefore measured relative to the wage paid to the employee: the dollar value of their work output. The standard deviation of individuals’ job performance is at least 40% of the mean salary for the job.

Let’s say we’re looking at a group of candidates for an engineering role advertised at a salary of $40,000, and we divide them into three categories based on their performance: 25% are low performers, 50% are average performers, and 25% are high performers. In this situation, the standard deviation between different levels of job performance is 40% × $40,000 = $16,000. Therefore, one low performer would generate roughly $16,000 less for the company per year than an average performer, and $32,000 less than a high performer.

If you were to have 100 such vacancies every year, failing to select the highest-performing candidates could push the cost of wrong hires to $3,200,000 per year. Such a loss could very well determine the success or failure of that company within a few years.
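To make the arithmetic concrete, here is a minimal sketch of the calculation above in Python. The 40% standard-deviation figure and the $40,000 salary come from the example; the worst case of every vacancy going to a low performer instead of a high performer is an illustrative assumption.

```python
# Illustrative sketch of the mis-hire arithmetic above.
# Assumption: the standard deviation of yearly dollar output
# across workers is 40% of the job's mean salary.
salary = 40_000
sd_output = 0.40 * salary           # $16,000

gap_low_vs_avg = sd_output          # low vs. average performer: ~$16,000/year
gap_low_vs_high = 2 * sd_output     # low vs. high performer: ~$32,000/year

# Worst case: 100 vacancies per year, each filled with a low
# performer where a high performer was available.
vacancies = 100
yearly_cost = vacancies * gap_low_vs_high
print(f"Potential cost of wrong hires: ${yearly_cost:,.0f}/year")  # $3,200,000/year
```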

Top that up with 9% more each time you improve the validity of your selection methods

When reviewing the ROI of selection methods, you should also consider the competition per vacancy. If you’re advertising 100 job openings and you get 100 applicants, investing in selection methods is unproductive because your organization can hardly afford to reject anyone. However, if you tend to have 100 applicants per opening – in other words, a rejection rate of 99% – then the cost of mistakenly hiring a low or average performer when a high performer was available grows, and the returns on your investment in selection methods swing much more in your favor.

Some formulas allow you to calculate how much extra revenue you generate for the organization when you improve the validity of your selection methods. On average, improving validity by just one per cent increases the dollar output per hire by 9% of their wage, or $18,000/year for a medium-complexity job. If we’re talking about hiring executives, that number is much higher.
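One widely used formula of this kind is the Brogden-Cronbach-Gleser utility model from the selection literature. Below is a hedged sketch in Python: the model form is standard, but the inputs (the $40,000 salary, the 40% standard-deviation figure, a 1% selection ratio, a .01 validity gain) are illustrative assumptions, so the output will not exactly reproduce the averages quoted above.

```python
from statistics import NormalDist

def mean_selected_z(selection_ratio: float) -> float:
    """Average standard score of hires when you select the top
    fraction of applicants on a normally distributed predictor."""
    nd = NormalDist()
    cutoff = nd.inv_cdf(1 - selection_ratio)
    return nd.pdf(cutoff) / selection_ratio

def gain_per_hire(delta_validity: float, sd_output: float,
                  selection_ratio: float) -> float:
    """Brogden-Cronbach-Gleser estimate of the extra dollar output
    per hire per year from raising predictor validity."""
    return delta_validity * sd_output * mean_selected_z(selection_ratio)

# Illustrative inputs: $40,000 salary, output SD at 40% of salary,
# 100 applicants per opening (selection ratio of 1%).
sd_output = 0.40 * 40_000
extra = gain_per_hire(delta_validity=0.01, sd_output=sd_output,
                      selection_ratio=0.01)
print(f"~${extra:,.0f} more output per hire per year")  # ~$426
```

Note how the gain scales with the selection ratio: the more applicants you can afford to reject, the higher the average standard score of those you keep, which is the formula’s way of expressing the competition-per-vacancy point above.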

The most scientifically valid selection methods

Researchers Schmidt, Oh, and Shaffer (2016), whom I’ve been citing so far, reviewed all the available academic literature investigating selection methods. They calculated the validity of several selection methods in predicting job performance and compiled a digestible leaderboard for us mortals.

General mental ability (GMA) / Intelligence

One of their main conclusions, aligned with previous research, was that general mental ability (GMA) tests alone are 65% valid. They also found that many selection measures claiming to evaluate something other than GMA fail to do so: their incremental validity once GMA is accounted for is often next to nothing. In other words, intelligence is a strong predictor of job performance, and many selection methods that claim to test something else fail in practice because general intelligence influences the construct they aim to measure.

Therefore, Schmidt and his colleagues wanted to find out which selection methods do evaluate something other than GMA and are therefore useful in their own right. Below are two similar-looking tables: the first shows the most effective selection tools for predicting job performance, and the second for predicting training performance.
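Before the tables, a note on what “% gain in validity over GMA” means: it is how much the overall validity rises when a method is added on top of a GMA test. Here is a minimal sketch of that computation using the standard formula for the multiple correlation of two predictors; the interview validity and the GMA-interview correlation below are hypothetical numbers, not values from the paper.

```python
from math import sqrt

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation of a criterion with two predictors, given
    each predictor's validity (r1, r2) and their intercorrelation r12."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return sqrt(r_squared)

# Hypothetical inputs: GMA validity .65, structured-interview
# validity .58, and a GMA-interview correlation of .47.
gma_validity = 0.65
combined = multiple_r(gma_validity, 0.58, 0.47)
print(f"gain over GMA alone: {combined - gma_validity:.2f}")  # ~0.07
```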

The value add of various selection methods when GMA is accounted for (JOB PERFORMANCE)

| Selection procedure | % gain in validity over GMA |
| --- | --- |
| GMA tests | 65% |
| Integrity tests | 20% |
| Employment interviews (structured) | 18% |
| Employment interviews (unstructured) | 13% |
| Interests | 10% |
| Phone-based interviews (structured) | 9% |
| Conscientiousness | 8% |
| Reference checks | 8% |
| Openness to Experience | 6% |
| Biographical data | 6% |
| Job experience | 5% |
| Personality-based EI | 5% |
| Person-organization fit | 4% |
| SJT (knowledge) | 2% |
| Person-job fit | 2% |
| Assessment centers | 2% |
| T & E point method | 1% |
| Grade point average | 1% |
| Years of education | 1% |
| Extraversion | 1% |
| Peer ratings | 0% |
| Ability-based EI | 0% |
| Agreeableness | 0% |
| Work sample tests | 0% |
| SJT (behavioral tendency) | 0% |
| Emotional Stability | 0% |
| Graphology | 0% |
| Job tryout procedure | 0% |
| Behavioral consistency method | 0% |
| Job knowledge tests | 0% |
| Age | 0% |

The value add of various selection methods when GMA is accounted for (TRAINING PERFORMANCE)

| Selection procedure | % gain in validity over GMA |
| --- | --- |
| GMA tests | 65% |
| Integrity tests | 16% |
| Biographical data | 11% |
| Employment interviews (unstructured) | 11% |
| Interests | 11% |
| Conscientiousness | 9% |
| Reference checks | 6% |
| Employment interviews (structured) | 5% |
| Years of education | 4% |
| Extraversion | 3% |
| Assessment centers | 2% |
| Peer ratings | 1% |
| Agreeableness | 1% |
| Emotional Stability | 0% |
| Openness to Experience | 0% |
| Job experience | 0% |

Generally, pre-employment assessments ranked high on these charts. To me, it was a surprise that job personality tests ranked rather modestly on Schmidt et al.’s chart. The one pitfall we must be aware of when looking at these numbers, though, is that the researchers considered correlations with job performance irrespective of the role. If we looked at selection measures for sales roles specifically, for example, we might find that assessing agreeableness is more valid for that purpose.

Your feedback

Questionmark provides a GMA test, the Thinking Skills Assessment, and an assessment platform where you can conduct most of the selection measures listed above. However, I would be interested to hear from you: Does this data align with or contradict your beliefs about talent selection? Which selection methods have proved the most valid in your experience? Please feel free to answer on our LinkedIn pages and profiles – let’s start an interesting conversation!

Questionmark also provides a data literacy assessment, Questionmark Data Literacy by Cambridge Assessment, which enables employers to measure the level of data literacy skills among teams.

To learn more about Questionmark or our assessment content, please contact us.

Paula Baciu, Assessment Content Manager, is part of the Content Team at Questionmark and is committed to developing and licensing ready-made test content that increases organizational performance. She has been working closely with representatives from Cambridge Assessment to publish the Questionmark Data Literacy by Cambridge Assessment and Questionmark Thinking Skills by Cambridge Assessment tests.