Artificial Intelligence (AI) tools are becoming commonplace in the workplace. According to Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows, “as many as 83 percent of employers and up to 99 percent of Fortune 500 companies now use some form of automated tool to screen or rank candidates for hire.”

Just as the use of AI in the workplace has developed rapidly, so too have the laws and regulations impacting workplace AI – particularly where there are concerns that irresponsible use could create a risk (or inaccurate impression) of unintended bias or discrimination. Some jurisdictions, like New York City, have already begun enforcing laws governing the use of AI in the workplace. Meanwhile, federal and state legislators have introduced similar bills aiming to address algorithmic discrimination and to regulate automated employment decision-making.

Algorithmic Discrimination

The primary goal of many legislative efforts around AI is to prevent automated decision-making tools from differentially treating individuals or groups based on protected class status. In contrast to the employment-focused regulations discussed later in this article, the following proposed laws would, if enacted, apply broadly – impacting everyone from AI developers who design subject technologies to employers and other deployers utilizing such tools for critical decisions.

In the United States Senate and House of Representatives, the proposed Algorithmic Accountability Act of 2023 (S. 2892; H.R. 5628) would direct the Federal Trade Commission (FTC) to require American businesses to complete impact assessments for the use of automated decision systems and augmented critical decision processes. These impact assessments would be published by the FTC in a publicly accessible repository. If enacted, the act would also create a Bureau of Technology within the FTC to aid in the enforcement of the act, among other duties.

In California, AB 2930 seeks to regulate the use of AI in a wide array of decision-making, including in employment, housing, family planning, healthcare, and other areas. If enacted, employers and other deployers would be required to complete impact assessments and establish governance programs and policy disclosures in connection with their use of automated decision-making technologies. Additionally, employers and other deployers would be required to provide advance notice to any person subject to certain high-risk automated decisions made using a subject technology.

Connecticut’s SB 2 similarly emphasizes risk management and applies to both developers and deployers of “high-risk AI systems.” AI system developers would be required to implement safeguards to shield consumers from the risks of algorithmic bias, including by maintaining records of the system’s limitations and uses, establishing governance practices for training datasets, and detailing methods for system oversight by deployers of the technology. The bill requires employers and other deployers of a subject AI system to exercise “reasonable care” to prevent algorithmic discrimination. SB 2 offers a safe harbor for compliance with specific provisions, requires the establishment of a risk management policy and the execution of an impact assessment, and requires both developers and deployers to publicly disclose risk mitigation strategies. The bill further requires reporting of any actual or potential discrimination to the state attorney general.

Illinois’ HB 5322 defines key terms related to AI and automated decision-making. Deployers and developers of automated decision tools would be required to perform and document annual impact assessments, which must be made available to the Attorney General upon request but which would be exempt from FOIA requests. Before sale or licensing, developers must also provide deployers with a statement outlining the intended uses of the tool, its limitations, any foreseeable risks of algorithmic discrimination, the data types used for programming or training, and a description of how the tool was evaluated for validity and explainability. The bill contains an exemption for deployers with fewer than 50 employees, unless the tool they deployed impacted more than 999 people in the previous year.

In contrast to these active legislative efforts, bills in Rhode Island (HB 7521), Virginia (HB 747), and Washington (HB 1951) have experienced setbacks. Rhode Island’s bill has been placed on hold for additional study, while Virginia’s and Washington’s proposed legislation will not progress this legislative session. These stalls reflect a “wait and see” approach, possibly indicative of the intricate balance legislatures are seeking to strike between technological innovation, burdens on business, and effective regulation.

Automated Employment Decision-Making

Closely related to, yet distinct from, general automated decision-making regulations are those concerning Automated Employment Decision Tools (AEDTs). These regulations aim to shape the AI tools used for hiring, firing, promotion, and compensation decisions, among other particularly sensitive areas of employment-oriented decision-making.

Federal AEDT-Related Developments

At the federal level, the EEOC has issued guidance providing insight into Title VII’s application to algorithmic decision-making tools used in hiring, promotion, and termination procedures, among other employment decisions. The EEOC guidance clarifies that AEDTs are subject to the same standards as traditional employment selection procedures and should not cause a disproportionate adverse effect on employees or applicants based on race, color, religion, sex, or national origin, in accordance with traditional principles for evaluating disparate or adverse impact.

The EEOC guidance further advises that employers are responsible for any discrimination resulting from the use of AEDTs, even if such tools are created or administered by third-party vendors. The Commission highlighted the Uniform Guidelines on Employee Selection Procedures under Title VII, focusing on issues such as selection rates, the “four-fifths rule,” and the importance of a tool being job-related and consistent with business necessity.
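
To make the arithmetic behind the four-fifths rule concrete, the short Python sketch below compares each group’s selection rate to that of the most selected group and flags any ratio below 0.8. The group names and counts are hypothetical, and the rule itself is a rough screening heuristic rather than a definitive test of adverse impact.

    # Illustrative check of the "four-fifths rule" using hypothetical counts.
    # Selection rate = hires / applicants; a group's impact ratio is its rate
    # divided by the highest group's rate, with ratios below 0.8 treated as
    # preliminary evidence of possible adverse impact.

    applicants = {"Group A": 200, "Group B": 150}  # hypothetical applicant counts
    hires = {"Group A": 60, "Group B": 27}         # hypothetical hire counts

    selection_rates = {g: hires[g] / applicants[g] for g in applicants}
    highest_rate = max(selection_rates.values())

    for group, rate in sorted(selection_rates.items()):
        ratio = rate / highest_rate
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")

    # Here, Group A is selected at 30% and Group B at 18%, an impact ratio of
    # 0.60 for Group B, under the 0.8 threshold and warranting closer analysis.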

In Congress, a select group of Senate Democrats has proposed several bills, including the No Robot Bosses Act of 2023 (S. 2419), which would ban exclusive reliance on automated decision systems for employment-related decisions, require pre-deployment testing for discrimination and bias, and mandate human oversight in reviewing system outputs. Additionally, the bill includes provisions for mandatory training on the operation of AEDTs and would require employers to make certain disclosures in connection with the use of AEDTs. The Act would also create a new division within the Department of Labor – the Technology and Worker Protection Division – to oversee the implementation of these requirements.

Another bill from the same core group of senators, the Exploitative Workplace Surveillance and Technologies Task Force Act of 2023 (S. 2440), would provide for the creation of an interagency task force to study and evaluate employer surveillance and workplace technology use, including the use of automated decision systems in the workplace. The task force would produce an initial report on its findings, with more detailed interim reports over subsequent years, culminating in a final report with recommendations. After presenting its findings, the task force would dissolve, leaving behind its proposed framework for addressing workplace surveillance and the use of AEDTs.

Similarly, the Stop Spying Bosses Act of 2023 (S. 262) would require employers to disclose workplace surveillance activities, prohibit the collection of sensitive data, particularly where it could affect workers’ privacy rights, and establish guidelines for automated decision-making systems in employment. The bill also provides for the establishment of a Privacy and Technology Division within the Department of Labor to enforce these regulations and adapt to new technological advances in workplace surveillance.

Local AEDT-Related Developments

In New York City, Local Law 144 has been in effect and subject to enforcement since July 2023. The law prohibits the use of AEDTs unless the tool has first undergone a bias audit, conducted by an independent auditor, measuring any discriminatory effects on federally reportable demographic groups. A summary of the audit must be published on the employer’s or employment agency’s website. Additionally, entities must notify affected New York City-based individuals at least ten business days in advance of the application of such tools, providing specific details about the assessment process and their data policies. Employees also have a right to request additional information about data collection and retention, which must be fulfilled within 30 days. The law expressly disclaims any construction to limit the availability of a private right of action for employees or job candidates, or to limit the authority or enforcement powers of the Commission on Human Rights.
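
By way of illustration, the implementing rules issued by the New York City Department of Consumer and Worker Protection (DCWP) center the bias audit on impact ratios; for tools that output a score, the rules compare, for each demographic category, the share of individuals scoring above the sample median against the highest category’s share. The Python sketch below, using hypothetical scores and generic category labels, shows the basic calculation in simplified form; it illustrates the underlying math only and is no substitute for an audit performed by an independent auditor under the DCWP rules.

    # Simplified sketch of the scoring-rate impact ratio used in Local Law 144
    # bias audits of tools that output scores: the share of each category
    # scoring above the sample median, divided by the highest category's
    # share. All scores and category labels below are hypothetical.

    from statistics import median

    scores = {
        "Category 1": [82, 75, 91, 68, 88, 79],
        "Category 2": [70, 65, 85, 72, 60, 77],
    }

    all_scores = [s for values in scores.values() for s in values]
    cutoff = median(all_scores)  # median across the full sample

    scoring_rates = {
        c: sum(s > cutoff for s in values) / len(values)
        for c, values in scores.items()
    }
    top_rate = max(scoring_rates.values())

    for category, rate in sorted(scoring_rates.items()):
        print(f"{category}: scoring rate {rate:.0%}, impact ratio {rate / top_rate:.2f}")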

State AEDT-Related Developments

Coming down the pipeline from state governments, New Jersey’s legislative proposals (A3854, A3855, A3911) are illustrative of the emerging concern surrounding AEDTs. Bills A3854 and A3855 would restrict the sale or use of AEDTs unless the tools have undergone independent bias audits and summaries of those audits are made available to the public. Additionally, A3855 would require employers to notify candidates about the use of these tools and provide them with details on alternative evaluation methods. Currently, both bills are under consideration by the Assembly Science, Innovation and Technology Committee.

New Jersey’s A3911 focuses on the use of AI in video interviews during the hiring process. The bill would require employers to inform applicants that AI analysis may be utilized in video interviews, explain the AI’s function and evaluation criteria, and secure the applicants’ written consent prior to the interview. Alongside these requirements, A3911 also requires employers to gather and submit demographic information to the New Jersey Department of Labor and Workforce Development.

Maryland’s HB 1255 would prohibit the use of AEDTs unless the compliance requirements under the act are satisfied. Those requirements include completing impact assessments to confirm the use would not result in any “high risk actions,” including potential discrimination or other unlawful disparate impacts on the basis of an actual or perceived characteristic. The bill also enshrines job candidates’ right to be notified when a subject tool is used to assess a candidate’s qualifications or characteristics.

Similarly, in New York, A9314 would prohibit the use of automated tools in employment decisions absent an impact assessment. The findings of these assessments could be kept internal and protected by privilege; however, a summary of each assessment would have to be made publicly available on the employer’s website before the tools are used, submitted to the state Department of Labor, made available in a public registry, and distributed to affected employees. The assessments would be required to measure the likelihood of adverse effects on groups based on sex, race, ethnicity, or any other protected class recognized by state law. If passed, the act would take effect immediately.

Separately, New York’s proposed A9315 takes a more expansive approach, imposing comprehensive conditions on employers’ use of AEDTs and electronic monitoring. Employers would be required to engage an external independent auditor to conduct annual bias audits of AEDTs and to notify employees before using such tools. Where an audit indicates disparate treatment, employers must address the issues and report their remedial actions. The bill further prohibits relying exclusively on AEDTs for employment decisions and requires adequate human review to supplement such tools. It also provides that employees cannot be subjected to decisions derived from AEDTs without the option to request a review of the tool’s outcomes.

New York’s A9315 also proposes strict regulations for the electronic monitoring of employees, requiring a legitimate purpose for the monitoring, comprehensive pre-collection notification to employees, and destruction of the data after its initial purpose is fulfilled or upon termination of employment. Vague language in monitoring notices, such as suggestions that monitoring “may” occur, would be expressly prohibited. A9315 would be enforced by the New York Attorney General, with a private right of action extended to affected employees. Civil actions for violations would be subject to joint and several liability for any involved parties, including employers, vendors, or developers of the AEDTs or monitoring tools.

Lastly, Vermont’s H.114 proposes restrictions on the use of electronic monitoring and automated decision systems in employment. The bill sets strict conditions under which employers may engage in electronic monitoring, requiring that it be necessary, minimally invasive, and limited in both scope and duration. Employers must also inform employees of the monitoring 15 days in advance, including details on how the data will be used and protected.

Furthermore, the Vermont bill imposes stringent limitations on the use of automated decision systems for employment-related decisions, prohibiting practices that could result in discrimination or violate employee rights. Employers would be required to ensure that decisions informed by these systems are corroborated by human oversight and that any health-related outputs are not used in employment decisions. The bill would further require employers to comply with data access requests from employees, to conduct impact assessments prior to using subject systems, and to update assessments with any significant changes. Facial, gait, and emotion recognition technologies are explicitly banned from use under the act. In its current form, the bill is subject to the same enforcement provisions as existing labor laws in Vermont, which include a private right of action for employees.

The Road Ahead: Preparing for Compliance

While it is unlikely that all these proposed bills will pass, they highlight a common legislative intent to ensure AI systems, particularly those used in employment settings, are developed and deployed with a vigilant eye on privacy, ethics, and equity. The emphasis on periodic impact assessments, the management of bias risks, and the provision of alternative processes underscores a regulatory trend toward heightened accountability and transparency.

For employers and practitioners, the ramifications of these impending regulations are clear. Organizations must be proactive in tracking employee notification requirements, incorporating rigorous AI oversight mechanisms, conducting impact assessments, and ensuring meaningful human intervention in the use of AEDTs and other AI. Moreover, a robust documentation and disclosure strategy must be an integral part of deploying AEDTs to avoid litigation and regulatory penalties.

Many of the contemplated statutory requirements (e.g., impact assessments) are not “new”; they already exist under currently effective or enacted state general privacy laws, demonstrating that privacy compliance efforts today can save substantial effort on AI compliance tomorrow. Employers can capitalize on the advantages and efficiencies of AEDTs while effectively mitigating risk by actively monitoring new regulations and partnering with informed counsel to ensure vigilant compliance.