Regulators Eyeing Algorithmic Discrimination in U.S. Workplaces

Across the country, at all levels of government and industry, artificial intelligence (AI) has become the focus of intense attention as machine learning technology has advanced by leaps and bounds in just a couple of years. AI is more than any single program: it relies on specialized systems that learn and refine their own algorithms, and its uses are vast, with applications in everything from writing essays to self-driving big rig trucks to criminal sentencing. This has many implications for employers and employees. One that has captured the attention of government regulators and anti-discrimination lawyers is the use of algorithms in hiring.

Creators and proponents of AI will tell you that using this technology in hiring helps eliminate the pitfalls of innate human bias and thereby reduces instances of employment discrimination. But as our Los Angeles employment discrimination lawyers can explain, that doesn't tell the whole story.

Because AI technology relies on machine learning, the data fed into it dictates the results it produces. If the information going in is even slightly skewed by biases in gender, race, nationality, ethnicity, religion, or other protected characteristics, the results will perpetuate those biases, and possibly even compound them. This can be intentional, but from what we've seen, it's largely unintentional. Good intentions, however, don't change the adverse impact.

And this issue now has a name: Algorithmic Discrimination.

Just last year, the Equal Employment Opportunity Commission (EEOC) issued guidance on workplace algorithmic discrimination and promised to be proactive in getting ahead of the issue so that workplace policies can keep pace with the technology. The effort, called the Artificial Intelligence and Algorithmic Fairness Initiative, encourages industry self-regulation for companies using AI in recruiting and hiring.

Are There Any Algorithmic Discrimination Laws?

Employers in California and beyond are increasingly using AI to:

  • Present job ads to targeted groups.
  • Determine whether applicants meet the qualifications for the position.
  • Hold online video interviews.
  • Measure an applicant’s skills or abilities.
  • Score resumes.

But despite the technology becoming more ubiquitous in the workplace, there are, at this point, no federal employment algorithmic discrimination laws on the books. (California doesn't have one either. A bill introduced earlier this year, AB 331, would have required employers using AI in recruiting and hiring to assess the risk of algorithmic discrimination, but it died in committee.)

However, the EEOC did release guidance on current best practices for avoiding algorithmic bias, not just in hiring, but also in medical exams and other workplace decisions. The guidance relies heavily on the framework of protections outlined in the Americans with Disabilities Act (ADA). It generally warns against:

  • Using AI tools that might discriminate on the basis of disability because the algorithms aren't trained to account for such disparities.
  • Failing to offer reasonable accommodations to applicants and employees when AI tools are used to make decisions. Individuals with conditions like cerebral palsy, epilepsy, blindness, or autism might have difficulty using the type of AI tools that employers require them to use in the recruiting/hiring process, and that needs to be accounted for. Employers can be held liable for the impact of any discriminatory hiring technology they use, even if the discrimination was unintentional and even if the technology belongs to another company.

An example of unintentional algorithmic discrimination: an AI tool is programmed to predict who will make a good employee by comparing applicant attributes to the attributes of the company's past successful employees. That approach has a blind spot. Individuals with disabilities have long been discriminated against and wrongly excluded from many jobs, and are therefore less likely to be represented among past and current successful staff, so the tool learns to screen out the very traits associated with them.
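To make that mechanism concrete, here is a minimal, hypothetical sketch (written in Python, with invented data and feature names; it does not reflect any real hiring tool) of the "compare applicants to past successful employees" approach described above. Because the historical data contains no one with a résumé gap, a trait that often correlates with disability, the scorer penalizes applicants with a gap even though the gap says nothing about their qualifications.

```python
# Hypothetical illustration only: a screening tool that scores applicants by
# how closely they resemble a company's past "successful" employees.
# All data and feature names are invented for demonstration purposes.

from statistics import mean

# Historical "successful employee" profiles. Because of past exclusion, none
# of them have a multi-year resume gap (a trait that often correlates with
# disability, illness, or caregiving).
historical_employees = [
    {"years_experience": 6, "typing_speed_wpm": 75, "resume_gap_years": 0},
    {"years_experience": 8, "typing_speed_wpm": 80, "resume_gap_years": 0},
    {"years_experience": 5, "typing_speed_wpm": 70, "resume_gap_years": 0},
    {"years_experience": 7, "typing_speed_wpm": 78, "resume_gap_years": 0},
]

FEATURES = ["years_experience", "typing_speed_wpm", "resume_gap_years"]

# The "ideal" profile is simply the average of the past successful employees.
ideal = {f: mean(e[f] for e in historical_employees) for f in FEATURES}


def similarity_score(applicant: dict) -> float:
    """Higher is 'better': negative squared distance from the ideal profile."""
    return -sum((applicant[f] - ideal[f]) ** 2 for f in FEATURES)


applicant_a = {"years_experience": 7, "typing_speed_wpm": 76, "resume_gap_years": 0}
# Applicant B is identical on every job-relevant measure, but took three years
# off for medical treatment -- an attribute with no bearing on the job itself.
applicant_b = {"years_experience": 7, "typing_speed_wpm": 76, "resume_gap_years": 3}

print("Applicant A score:", similarity_score(applicant_a))  # roughly -0.31
print("Applicant B score:", similarity_score(applicant_b))  # roughly -9.31

# Applicant B scores far lower purely because of the resume gap. The absence
# of people with gaps in the historical data becomes bias in the output.
```

A real hiring model would be far more complex, but the failure mode is the same: whatever patterns exist in the historical data, including the absence of people who were previously excluded, get reproduced in the scores.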

Companies are allowed to screen for qualifications that are job-related and consistent with business necessity, but they cannot discriminate on the basis of disability. They also must provide reasonable accommodations if doing so would allow an otherwise qualified candidate to perform the job effectively.

In addition to the EEOC guidance, the Biden Administration last fall issued a Blueprint for an AI Bill of Rights, which specifically addresses algorithmic discrimination and outlines a range of strategies to avoid it.

Although this guidance does not (yet) carry the force of law, as employment attorneys in Southern California, we expect it will be relied upon when complaints and lawsuits about machine bias eventually start cropping up. Already, we're seeing some states introduce their own policies and regulations. The California Civil Rights Council last year gave the green light to proposed regulations to oversee fairness in AI tools used for hiring. Other initiatives include measures in Maryland and Illinois limiting the use of video or facial recognition during interviews. In New York City, employers face civil fines if they use an AI hiring tool that doesn't comply with certain anti-bias rules.

We think it likely that employers are going to try policing themselves voluntarily in order to avoid hardline legislative rules. How successful they’ll be with that remains to be seen.

Contact the employment attorneys at Nassiri Law Group, practicing in Newport Beach, Riverside and Los Angeles. Call 714-937-2020.
Additional Resources:
Auditing employment algorithms for discrimination, March 12, 2021, By Alex Engler, Brookings Institution
More Blog Entries:
How Employers Can Prevent California Workplace Retaliation, June 16, 2023, Los Angeles Employment Discrimination Lawyer Blog