AI-Supported Recruitment Processes: Risks of Discrimination

So, following our recent article on proposed new regulations, it is not just the EU that is looking askance at the potential risks of artificial intelligence in recruitment. Across the pond, we learn that the US Department of Justice has warned employers to take steps to ensure that their use of AI in recruitment does not disadvantage disabled job applicants, on pain of violating the Americans with Disabilities Act. The ADA already requires US employers to make the equivalent of the UK's reasonable adjustments to allow disabled applicants to participate fairly in the recruitment process. However, the ADA and the Equality Act were designed long before the widespread use of AI in recruiting, so there are concerns that automated decision-making originally intended to reduce the scope for subjectivity and bias may actually create new disadvantages for disabled applicants, typically by weeding out people who, because of their health condition, do not match the "ideal" the algorithm is looking for.

By way of example only, a candidate whose disability limits their manual dexterity or visual acuity may have difficulty completing an on-screen/keyboard test or application or handling the required interactions with a chatbot, especially under time pressure. They would therefore be disadvantaged by the AI even though the role applied for might not involve material keyboard use, or alternative technologies might be provided if they got the job that would allow them to work around the problem. An algorithm looking for suspicious gaps in CVs might successfully weed out those who have spent material time at Her Majesty's pleasure, but perhaps also those who have had to take a career break for medical reasons but are otherwise perfectly suited to the role to be filled. Similarly, it will be unlawful under the ADA, and potentially the Equality Act too, for AI to weed out someone who would have scored highly enough to progress through the process had reasonable adjustments been made, and this is so even if that candidate would not ultimately have got the job for other reasons. Video interview software that analyzes candidates' speech patterns, facial expressions or eye contact could easily have a disproportionate impact on candidates with certain disabilities.

These are exactly the sort of concerns that underpin EU thinking about the risks of unfettered use of AI in recruitment, and therefore the need for the proposed checks referenced in our blog. It is therefore to be expected that an employer submitting its AI system to the proposed national approval body for its certificate of airworthiness will want to show as far as possible that it has addressed this problem. That will mean demonstrating either that its algorithm has been trained not to select on the basis of such potentially unlawful factors, or at a minimum that there are parallel safeguards to avoid any adverse impact. This could mean, for example, a separate recruitment process for those who reasonably fear that their medical condition might cause them to underperform against the system's expectations, perhaps involving oral or in-person interviews, extended deadlines for the completion of tests, or the waiving or moderation of scores affected by the disability against criteria that have only peripheral relevance to the job applied for.

However, guidance issued by the US Equal Employment Opportunity Commission earlier this month makes it clear that reasonable accommodation will not include "lowering of production or performance standards" or dropping necessary parts of a role just to make it more accessible. From this it would seem to follow that there is no obligation on the employer to make its AI screening process less exacting across the board, not least because that would undermine the whole point of having it in the first place. Instead, the focus will increasingly need to be on taking steps to minimize the risk of resulting disadvantage, whether by adjusting the algorithm's programming or, as above, by providing a separate recruitment channel which does not allow lower standards but gives disabled candidates the best chance of showing that they meet them.

Employers purchasing AI recruiting tools would therefore be well advised to seek contractual assurances from manufacturers and vendors that their systems have been designed to work around these potential issues. Ultimately, however, the employer is unlikely to be able to pass any liability for discrimination onto the vendor. Strictly speaking, it is not the operation of an AI system capable of discriminating in this way that is unlawful, but how the employer then uses its output and what steps it takes to prevent any bias built into the AI's programming from causing actual disadvantage to disabled applicants.

© Copyright 2022 Squire Patton Boggs (USA) LLP
National Law Review, Volume XII, Number 140
