Businesses are always on the lookout for ways to be more efficient. “Work smarter, not harder,” the saying goes. That’s also true in hiring. To that end, many employers have begun using artificial intelligence in their hiring processes. While this might seem like an ideal solution, both for increasing efficiency and for eliminating the biases that come with human decision-making, the reality is far murkier. Many forms of AI are imperfect, and their flaws can make them anything but unbiased. Sometimes those biases result in violations of anti-discrimination laws. If you think you’ve encountered that kind of hiring bias and been denied employment because of it, you should get in touch with a knowledgeable New Jersey employment discrimination lawyer to discuss your situation.
Most recently, the federal government called on employers to exercise caution when using algorithms and AI in their hiring processes. The U.S. Justice Department and the Equal Employment Opportunity Commission issued a guidance document on May 12 laying out ways that these automated systems can unfairly disqualify some people with disabilities.
For example, some employers use automated personality tests or other cognitive screening exams to assess particular “personality, cognitive, or neurocognitive traits.” The problem with these exams is that they can potentially screen out people with “cognitive, intellectual, or mental health-related disabilities,” even when those people possess the qualifications the test was supposed to measure and should not have been eliminated from consideration.
Eliminating the Wrong Candidates for the Wrong Reasons
How is that possible? It can happen in various ways. Some tests take the form of a video game intended to measure specific cognitive abilities and/or emotional intelligence. With these games, people with blindness, PTSD, or other disabilities related to vision or processing visual stimuli might fail, even though they have all the traits the employer actually was seeking.
Unfortunately, the potential problems with these automated procedures can go beyond disability discrimination and into race discrimination. For example, a University of Maryland information systems professor looked at the way some popular facial recognition technologies assessed emotions. (This matters in employment because some hiring teams may use these technologies to screen for certain desired emotional traits or screen out undesirable ones.)
As one example, the study compared the official “profile” pictures of two current professional basketball players: one Black, and one white. Both smiled, but neither smiled broadly. The Face++ technology gave both similar “smile scores” between 48 and 49. Despite that, the tool rated the white player as significantly happier (59% versus 39%) and rated the Black player as profoundly angrier (27% versus 0.1%).
Another thing to keep in mind when it comes to AI in hiring is that these processes are only as reliable as the data and programming behind them. In the middle of the last decade, Amazon attempted to develop an automated protocol for reviewing and “scoring” employment applicants’ resumes. The algorithm “learned” by being trained on the resumes of successful Amazon applicants from the previous ten years.
Unfortunately, the tech industry from 2004 to 2014 was dominated by men, and that unexamined implicit bias surfaced in the technology’s preferences. The AI downgraded resumes that contained the word “women” (such as “Society of Women Engineers” or “captain of the Women’s Soccer Club”) and severely downgraded resumes if the applicant attended either of two women’s colleges. Amazon eventually scrapped the project.
AI and Intersectional Discrimination
In 2020, Harvard published a study looking at major facial recognition technologies in relation to gender and color. The report found that, in each of the five technologies reviewed, the technology’s accuracy dipped (and in four of the five, dipped dramatically) when analyzing women of color. In three of the five, the technologies performed with an accuracy of around 99% when reviewing light-skinned male faces (and two of those three were similarly accurate with darker male faces), but all three plunged to 65-70% accuracy when presented with a darker female face.
This matters because the law recognizes intersectional discrimination. That is to say, you can pursue and (if you prove your allegations) win an employment discrimination case in New Jersey based on discrimination that occurred not simply because you were a woman or were Black but specifically because you were a Black female.
For better or worse, AI and other new technologies are not going away, and they’re not leaving the area of recruiting and hiring. Whether your decision-maker was human or AI, the law still demands that you receive a hiring process free from discrimination based on race, gender, disability, etc. When you don’t, the experienced New Jersey employment discrimination attorneys at Phillips & Associates are here to help. To get our team started on your case, contact us online or at (609) 436-9087 today to set up a free and confidential consultation.