In theory, AI enables a more efficient, targeted, and cost-effective hiring process by helping companies sift through volumes of candidates.
In reality, AI can promote biased hiring due to a reliance on unconsciously prejudiced selection factors like demography and language patterns.
Specifically, predictive AI can perpetuate the status quo because it’s often modeled on biased data sets.
Businesses must address the biases in predictive recruiting tools to create a wholesome and effective recruiting program.
Use this article to learn how to mitigate the risk of unconsciously biased hiring with AI tools.
AI Recruiting Biased Toward Demography and Language Patterns
AI is pattern-recognition technology that can reflect the unconscious biases of its human creators.
Machine learning models are trained on data sets, meaning that any biases in the data will be mirrored in the end product.
Put simply, if the data originates from humans, AI tools built on it can reflect human bias.
For example, Amazon developed a recruiting tool that unknowingly penalized the term “women’s” or the names of women’s colleges in applications.
The AI’s unfair preference for male applicants wasn’t malicious. Amazon’s algorithm was simply observing patterns in desirable applicants from the past 10 years – most of whom were male.
“What can naturally happen is you build a model that identifies common characteristics of your current workforce, which isn’t diverse,” said Upturn managing director Aaron Rieke in a Business Insider article.
“[This model] might reflect the fact that hiring managers have traditionally given preference to male candidates over women,” Rieke said.
Predictive hiring tools first screen, then rank, job applicants based on a range of demographic and background factors, including:
- Former employers
Bias appears when AI screens and downranks applicants whose demographic traits – however irrelevant to the position – differ from those in the original data set. This bias disadvantages minority groups who’ve typically been underrepresented in the workplace.
Similarly, predictive tools can punish job seekers whose linguistic patterns diverge from the baseline.
In Amazon’s case, the hiring tool favored resumes containing verbs like “executed” and “delivered,” which female applicants used less frequently.
AI tools also can punish the use of dialects, and thus, candidates from geographical areas that don’t overlap with the current workforce.
Hiring managers, by comparison, prize unique language as a key indicator of a candidate’s personality and intelligence.
To avoid unconscious discrimination in the recruiting process, businesses must develop AI programmed with large, diverse data sets. If these technologies aren’t available, companies should consider alternative methods for screening and hiring candidates.
How To Address AI Bias In The Hiring Process
Experts say that businesses should take a comprehensive approach to recruitment instead of relying on predictive technology.
Though AI accelerates and reduces the cost of the screening process, companies risk reproducing biases and perpetuating the status quo. Given AI’s reliance on historical data, the technology can’t yet be made impartial enough to serve as a standalone resource.
Meanwhile, rank-ordered lists of candidates can unduly impact hiring managers while overstating marginal or unimportant distinctions between candidates of similar qualifications.
Three ways businesses can mitigate the risk of AI’s unconscious bias during recruitment include:
- Candidate masking
- Bias-detecting technology
- Cutting AI out of recruitment
Candidate masking is hiding the personal characteristics of a candidate from hiring managers, including age, ethnicity and other factors that can trigger bias. This helps companies to evaluate candidates solely on the basis of merit and potential.
Hiring managers can run predictive screening tools twice: once with candidate masking active, and once without.
Comparing the two results can reveal points of bias in the technology, suggest standout candidates, and help hiring managers to make an informed, deliberate choice.
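The masked-versus-unmasked comparison above can be sketched in code. This is a minimal illustration, not any vendor's actual tool: the field names, candidate records, and scoring function are all hypothetical, and a real system would score resumes with a trained model rather than a hand-written function.

```python
# Sketch of candidate masking: rank applicants twice, with and without
# bias-prone fields, and flag positions where the rankings diverge.
# All field names and the scoring function are hypothetical examples.

# Fields that can trigger demographic bias and should be hidden.
MASKED_FIELDS = {"name", "age", "gender", "ethnicity", "college"}

def mask_candidate(candidate: dict) -> dict:
    """Return a copy of the candidate record with bias-prone fields removed."""
    return {k: v for k, v in candidate.items() if k not in MASKED_FIELDS}

def rank(candidates, score_fn):
    """Rank candidates, best first, by a scoring function."""
    return sorted(candidates, key=score_fn, reverse=True)

def compare_rankings(candidates, score_fn):
    """Rank once with masking and once without.

    Any position where the two orderings disagree suggests the masked
    attributes influenced the score — a possible point of bias.
    """
    masked = rank([mask_candidate(c) for c in candidates], score_fn)
    unmasked = rank(candidates, score_fn)
    masked_ids = [c["id"] for c in masked]
    unmasked_ids = [c["id"] for c in unmasked]
    return [(m, u) for m, u in zip(masked_ids, unmasked_ids) if m != u]
```

Because masked records lack some fields, the scoring function should read fields defensively (e.g. with `dict.get`) so it works on both versions of a candidate record.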
IBM is designing automated bias-detection algorithms that mimic the anti-bias processes in the human mind. Aimed at offsetting the biases of current AI, this technology could greatly enhance the validity of predictive recruiting technology going forward.
Don’t Rely On AI
For best results, hiring should be a cumulative series of small decisions.
Businesses can use AI to estimate a candidate’s suitability or predict the reach of a job posting, but the rankings provided should be but one point of consideration. Businesses should use predictive recruiting tools sparingly, and remember that these tools can perpetuate unconscious biases.
Addressing AI Biases In Recruiting
Businesses must recognize the limitations of machine learning and make deliberate, fair hiring decisions.
Ironically, the use of supposedly impartial technology actually heightens the risk of undesirable “mirror image recruiting.”
In the future, predictive AI may be modeled on larger and more diverse data and thus better able to generate trustworthy hiring criteria.
Grayson Kemper is a Content & Editorial Manager at Clutch, a data-driven research, ratings, and reviews platform. Using Clutch, businesses can vet and make informed decisions about the best services and solutions providers for their needs.