Recruitment practices: is there a place for technology?

Employers may be tempted to think that, compared to a human, technology produces more accurate results faster and should be embraced at all costs. While tech can be a valuable aid, care is needed, particularly when it comes to recruitment.

Can technology lawfully screen applications?

When it comes to recruitment, any way to quickly and accurately sort through applications to identify viable candidates is a benefit to employers. However, solely relying on technology to carry out this screening process carries with it several risks, as we discuss below. 

  1. Inaccurate results

    Setting absolute parameters for a screening process leaves no scope for modifying the process to take account of nuance or irrelevant information. For example, software may scan an applicant’s social media accounts, flag the use of unacceptable words and reject the applicant on that basis, when in fact the words were used for legitimate reasons. Similarly, the software may misidentify the applicant and accept or reject them on the strength of someone else’s record. Checkr, which uses AI to provide background checks for employers, faced legal action when applicants flagged as having been convicted of various crimes in reality merely shared a similar name with the true offenders.

    Increased use of technology to screen applicants by analysing social media profiles, news websites and online behaviour may also give employers information they would not otherwise have had, such as spent criminal convictions. This may raise issues under the Rehabilitation of Offenders Act 1974, although an applicant who believes they were rejected because of a spent conviction has limited means of redress.
  2. Indirect discrimination

    Another difficulty with recruitment decisions that are based on AI is that an algorithm may well be applied consistently to each applicant, but the criteria and scope of that algorithm will be decided by a human. Therefore, there is a risk that bias (whether conscious or unconscious) trickles through the recruitment process, resulting in indirect discrimination against certain candidates. 

    For example, an algorithm that is programmed with data sets based on patterns in CVs of existing employees will look for similar candidates. If the existing workforce is predominantly male, then this is likely to mean the algorithm favours men and will inadvertently discriminate on the basis of sex.

    Similarly, if the AI is asked to sift out any job applicant previously rejected by, or dismissed from, the employer, it is unlikely to be sophisticated enough to assess whether the previous reason for rejection or dismissal was legitimate and, importantly, whether it is correct to refuse to progress the application again on the same basis. Not only could this result in suitable candidates being missed; it also increases the risk of both direct and indirect discrimination claims, for instance allegations that the employer has a practice of excluding certain employees. A successful blacklisting claim carries severe financial penalties for employers and can cause significant reputational damage. Training the AI to take a more sophisticated approach requires greater investment of time and money, but is essential to mitigate these risks.
  3. Failure to make reasonable adjustments

    The use of technology in a recruitment process will also not absolve employers of the need to make reasonable adjustments for disabled candidates. Employers should investigate the limitations of what the adopted technology can detect and adapt it as appropriate. For instance, if the technology is screening written or video responses, consider what additional steps might be required to ensure that disabled candidates are not disadvantaged. Software screening applications could inadvertently exclude someone experiencing the effects of Tourette’s syndrome, dyslexia, dyspraxia or heightened anxiety. Employers need to be satisfied that, if the software cannot moderate its response to account for a disability, a human can intervene. Otherwise, there is a risk not only of indirect discrimination but also of a failure on the employer’s part to make reasonable adjustments.
  4. Data protection implications

    The UK GDPR restricts employers from making solely automated decisions that have a significant impact on job applicants and workers, except in limited circumstances such as where the decision is necessary for entering into or performing the employment contract, or where the data subject has consented. Employers are unlikely to meet these exemptions and should therefore always ensure that there is meaningful human involvement in the outcome of any employment decision involving technology.

Is technology therefore the future for recruitment?

Technology can certainly assist employers, but we are still a long way from a fully automated recruitment process. Human involvement remains key, and the ability to explain the rationale for a decision, whether in the form of feedback to a candidate or justification before an Employment Tribunal, is essential. Even where technology assists a recruitment process, when it comes to defending a claim the employer will still need individual employees to explain to the Tribunal how the technology was used, how it was programmed and how the decision was reached.


This information is for educational purposes only and does not constitute legal advice. It is recommended that specific professional advice is sought before acting on any of the information given. © Shoosmiths LLP 2024.
