Hiring is an integral but time-consuming, expensive and often tedious process with which every company must contend. Looking for ways to cut down on exhausting searches, an increasing number of companies are turning to artificial intelligence (AI) systems to help identify qualified candidates more quickly. This can prove especially beneficial for firms that need to cut through a huge influx of applications.
However, there is growing concern that such AI hiring systems may be perpetuating discrimination, particularly on the basis of age, gender and race. This is in spite of tech companies insisting that their systems are designed to root out long-standing human biases.
Last year, Amazon tested machine-learning techniques integrated into its recruiting platform, a “smart tool” that was supposed to help managers pick ideal job candidates faster. After being fed a decade’s worth of resumes, however, the system began showing a clear bias toward male candidates. In troubleshooting, the company’s engineers figured out that because most of those resumes came from male candidates, the system made the leap that male candidates were more desirable, and it downgraded the ratings of female applicants. Engineers addressed this by editing the programs to treat gendered terms neutrally. However, that doesn’t mean these systems won’t still prove discriminatory, now or in the future. (Amazon decided to ax the project before fully launching it, perhaps recognizing the potential legal liability landmine.)
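To see how this kind of bias can creep in, consider a deliberately simplified sketch (all data and scoring logic below are hypothetical, not Amazon’s actual system): a naive resume scorer trained on historical hiring outcomes will penalize terms that happened to appear mostly on rejected resumes, even when those terms say nothing about qualifications.

```python
# Hypothetical illustration: a toy resume scorer trained on skewed
# historical data absorbs the bias baked into that data.
from collections import Counter

# Made-up historical outcomes: (resume terms, hired?), mostly male hires.
history = [
    ({"python", "chess club"}, True),
    ({"java", "debate team"}, True),
    ({"python", "women's chess club"}, False),
    ({"sql", "women's soccer"}, False),
    ({"c++", "robotics"}, True),
]

hired = Counter()
rejected = Counter()
for terms, was_hired in history:
    for term in terms:
        (hired if was_hired else rejected)[term] += 1

def term_weight(term):
    # Naive weight: how often the term appeared on hired vs. rejected resumes.
    return hired[term] - rejected[term]

def score(resume_terms):
    return sum(term_weight(t) for t in resume_terms)

# "women's" terms only ever appeared on rejected resumes in the training
# data, so they drag the score down regardless of actual qualifications.
print(score({"python", "chess club"}))          # → 1
print(score({"python", "women's chess club"}))  # → -1
```

No one wrote a rule saying “prefer men”; the bias is inherited entirely from the historical data the scorer was trained on, which is exactly the pattern engineers reportedly found in Amazon’s tool.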
It’s no leap to surmise that similar discriminatory patterns could soon emerge elsewhere. If, for instance, a firm has a general tendency to hire fresh-out-of-college candidates, these systems could easily learn that tendency and begin skewing against older applicants.
There are various levels of artificial intelligence applications for hiring and recruiting. Basic tools might include automated social media scraping, or programs that match applicants against weak-to-average indicators of success at a firm. Intermediate applications might include simulations or tests that glean data directly from the applicant. Some tech employers, for instance, have applicants spend 20 minutes playing a neuroscience-based game to gauge traits like focus, risk-taking, memory and the ability to read contextual cues. Then there are advanced AI solutions that use algorithms designed to link the right recruits to a specific job posting.
So while companies may save some coin off the top by sparing their existing staff from sifting through a mountain of job applications by hand, they’d still need to invest in software development services to constantly monitor these processes. As our Los Angeles age discrimination attorneys can explain, the onus is on employers to ensure job applicants aren’t being discriminated against in violation of federal and state laws. If any protected group of individuals is being systematically discriminated against in the hiring process, the company could be held liable in an employment lawsuit.
As far as current AI algorithms go, what many California employers may not understand is that these systems can’t simply be put on autopilot. Their success is contingent on an actual person sitting down and reviewing whether the system is behaving as it should, in a manner consistent with the protections outlined in civil rights and labor laws. The question is whether these monitors will even be able to tell something is amiss before it becomes a major legal headache.
There is significant ageism reported in the tech sector generally, which may be further cause for concern considering that tech workers are the ones building these systems.
Contact the employment attorneys at Nassiri Law Group, practicing in Orange County, Riverside and Los Angeles. Call 949-375-4734.
How A.I. Could Enable Ageism, Discrimination in Hiring, Oct. 3, 2019, By Nick Kolakowski, Dice.com