Today, employers frequently rely on AI technology for hiring and other employment processes. These tools have made such processes faster and easier. However, some worry that they can introduce discrimination into employment decisions.
Employers must ensure this technology does not produce discriminatory outcomes. As AI plays a larger role in these processes, the United States Equal Employment Opportunity Commission (EEOC) intends to help prevent discrimination; the agency's vice chair believes employers can use this technology while still protecting people's civil rights.
In January, the EEOC held a public hearing on employment discrimination involving AI. Employer representatives, computer scientists, legal experts, industrial-organizational psychologists, and civil rights advocates spoke about their concerns. Approximately 2,950 people attended the hearing online.
According to the EEOC's vice chair, Title VII of the Civil Rights Act covers the use of automated employment decision tools. However, not every jurisdiction relies on Title VII alone. New York City, for example, restricts how employers can use these tools: as of January 1, the city requires businesses to conduct "bias audits" before using them and to notify employees or job applicants that the tools will be used. The city has delayed enforcement of the law until April 15.
Intentionally or not, some companies have discriminated even without screening for protected categories such as race or gender. This happens when seemingly neutral variables act as proxies for those categories. Common examples include credit histories and criminal records used in background checks.
According to a director at the American Civil Liberties Union, these background checks can discriminate against people of color, such as Black or Native American people, because of racial profiling and a history of redlining. He added that zip codes or a college education could also serve as proxies.
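To make the proxy mechanism concrete, here is a minimal toy sketch (entirely synthetic data, invented for illustration): a screening rule that never looks at a protected category can still disproportionately exclude one group when it keys on a correlated variable such as a zip code.

```python
# Hypothetical applicants: (zip_code, group). "Group" stands in for a
# protected category the screen never sees. Synthetic data for illustration.
applicants = [
    ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("20002", "B"), ("20002", "B"), ("20002", "A"),
]

# A facially neutral rule: screen out anyone from zip code 20002.
passed = [(z, g) for z, g in applicants if z != "20002"]

def selection_rate(group):
    """Fraction of a group's applicants who pass the screen."""
    total = sum(1 for _, g in applicants if g == group)
    kept = sum(1 for _, g in passed if g == group)
    return kept / total

# Group B is concentrated in zip 20002, so the rule excludes group B
# at a higher rate even though it never uses group membership.
print(selection_rate("A"))  # 2/3
print(selection_rate("B"))  # 1/3
```

Comparing selection rates by group like this is the basic idea behind an adverse-impact check, the kind of analysis a "bias audit" is meant to surface.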
An associate professor of law at a university in North Carolina commented on automated hiring programs. According to the professor, vendors often advertise these programs as able to replicate an employer's best worker. Unfortunately, that approach can also replicate existing biases. For example, basing the model on an employee with a name or characteristics more common among white people could lead to unintentional discrimination.
In this case, the system could screen out people of color who do not match the model employee's name or characteristics. To avoid this, the professor suggests that employers instead use variables shown to predict successful job performance. This example underscores why employers must guard against discrimination in their hiring processes. One crucial step is partnering with a trustworthy background check company; the right provider will help ensure the hiring process does not unintentionally discriminate.