The rapid emergence of AI is helping hiring organizations work faster and more efficiently, connect with qualified candidates, and boost talent engagement. But despite these benefits, there is legitimate concern that this new technology can lead to discriminatory hiring practices.
In order to ensure fair and equitable employment opportunities for all, government leaders at the federal, state, and local levels are considering—and in some cases have already passed—laws that regulate the use of AI technology in recruiting and hiring.
In this blog post, we’ll provide an overview of the various laws that are in effect and the ones currently being considered by legislators. We’ll also highlight actionable tips talent teams can follow to not only adhere to AI recruiting laws but also ensure their hiring practices are free of bias.
But before we get into it, we need to start with a short disclaimer. While we’ve done our best to summarize what each law entails, this blog post is for informational purposes only. It’s highly recommended you consult your legal counsel to truly understand how these laws impact your organization and what steps need to be taken to remain compliant.
There are currently no federal laws that specifically address the use of AI technology in hiring. However, a handful of existing laws prevent employment discrimination, meaning that organizations could be liable for unfair hiring decisions influenced by an AI-powered algorithm.
Specifically, Title VII of the “Civil Rights Act of 1964” prohibits employers from discriminating based on race, color, religion, sex, or national origin. If an AI platform disproportionately screens out candidates in any of these protected groups, it could be deemed a violation of the law—just as a biased decision made by a human would be.
In fact, the Equal Employment Opportunity Commission (EEOC) stated in May 2023 that employers are accountable for any hiring decisions made by an algorithmic decision-making tool and cannot place blame on the software vendor as a legal defense.
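To make that accountability concrete, one long-standing screen for adverse impact is the EEOC’s “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate falls below 80% of the highest group’s rate, the selection procedure may be flagged for adverse impact. The Python sketch below illustrates the arithmetic only; the group labels and counts are hypothetical, and any real analysis should be designed with legal counsel.

```python
# Four-fifths (80%) rule screen for adverse impact, per the EEOC's
# Uniform Guidelines. Group labels and counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / applicants."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, dict]:
    """Flag any group whose selection rate is below `threshold` times
    the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: {"rate": round(rate, 3),
                    "impact_ratio": round(rate / top, 3),
                    "flagged": rate / top < threshold}
            for group, rate in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates screened)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's impact ratio is 0.625 (below 0.8), so it gets flagged for review.
```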
Additionally, the “Americans with Disabilities Act (ADA) of 1990” prohibits discrimination against qualified individuals with disabilities. This means organizations must not only ensure that AI hiring tools do not screen out people based on disability; they must also provide reasonable accommodations if a recruiting platform is inaccessible to a disabled candidate (for instance, if an online assessment is challenging for a visually impaired applicant to complete).
Lastly, the “Age Discrimination in Employment Act (ADEA) of 1967” protects workers aged 40 and older. An AI system that filters out older applicants—even indirectly, such as by using graduation dates—could be seen as an ADEA violation.
The key takeaway is that no federal law explicitly regulates AI in hiring, but longstanding anti-discrimination laws broadly apply to this emerging technology. Organizations are fully responsible for the decisions or recommendations made by their AI tools—just as they are for the actions of any human recruiter or hiring manager.
While there are no specific AI-hiring laws on the books at the federal level, lawmakers and regulators are actively considering new measures, and recruiting professionals and hiring organizations should watch these federal proposals closely as they develop.
Without a specific federal law covering AI recruiting technology, several states have stepped in to fill the gap. Even if your organization doesn’t physically operate in any of these states, you should still be aware of these laws. If you have a remote workforce, it’s safe to assume these laws apply to you when considering candidates in any of these states (again, you should consult your legal counsel to make sure that’s the case). And even if you don’t operate in the handful of states that currently have or are considering legislation, there's a chance your state government will introduce some sort of law sooner rather than later.
Illinois was the first state to pass a law regulating AI use in hiring. The “Artificial Intelligence Video Interview Act” requires hiring organizations to take multiple steps before using AI to evaluate candidates during virtual interviews:
- Notify the candidate before the interview that AI may be used to analyze their video interview.
- Explain how the AI works and the general types of characteristics it uses to evaluate applicants.
- Obtain the candidate’s consent to be evaluated by the AI.
Under this law, if the candidate does not consent, the organization cannot move forward with the AI-powered interview evaluation.
Illinois recently expanded its civil rights law to directly address AI discrimination. When the new provision takes effect in 2026, it will be illegal for an employer to use AI in any employment decision (recruiting, hiring, promotions, or terminations) if the technology produces discriminatory results against a protected class.
The law also takes steps to prevent discrimination by proxy. For example, organizations cannot weigh factors like zip code or graduation year in employment decisions, as those factors can serve as stand-ins for protected characteristics such as race or age.
And similar to other laws, Illinois’s law requires organizations to notify candidates and employees when AI is being used to make decisions regarding their employment.
Maryland has a law that requires hiring organizations to obtain an applicant’s written consent before using face-scanning or emotion-detecting AI technology during virtual interviews. If the candidate does not opt in, the organization is prohibited from using the facial recognition solution.
Colorado was the first state to pass a cross-industry AI accountability law. The “Consumer Protection in AI Act” takes a “risk-based” approach to regulating AI systems, and technology used for employment decisions falls into the “high-risk” category.
It requires employers to “use reasonable care to avoid algorithmic discrimination.” Hiring organizations can comply by performing regular assessments to evaluate their AI systems for potential discrimination or bias and by clearly documenting these efforts.
In all likelihood, we’ll see a wave of other states pass AI-focused recruiting and employment laws in 2025 and beyond. Here is an overview of the states that are currently considering legislation and what their proposed laws would entail.
Lawmakers have introduced the “Artificial Intelligence, Machine Learning, and Automated Decision-Making Accountability Act” (also known as the “New York AI Consumer Protection Act”). This bill would require businesses that use AI or automated systems for critical decisions—such as hiring, housing, or lending—to actively assess and mitigate potential discrimination.
Additionally, the law would require organizations to clearly disclose to individuals when automated systems are used and to perform annual impact assessments.
California has also explored regulating the use of AI in hiring. In 2022 and 2023, lawmakers introduced bills aiming to prohibit algorithmic discrimination in employment and to mandate impact assessments and applicant notifications. However, no law has yet been passed.
Additionally, California’s Civil Rights Department (CRD) proposed regulations in June 2024 that treat discriminatory use of AI in hiring as a violation of existing civil rights laws. Under the proposed rules, AI hiring tools that negatively affect protected groups would be prohibited. Employers and AI vendors alike would also be required to keep detailed records and conduct regular bias audits.
A bill currently making its way through the New Jersey legislature would require hiring organizations to inform candidates whenever AI solutions are used. Furthermore, it mandates that automated hiring systems pass a bias audit before implementation.
Pennsylvania lawmakers have proposed an amendment to the state’s “Human Relations Act” that specifically applies to AI hiring technology.
If the law were to pass, hiring organizations would be required to notify candidates when automated decision-making systems are used, obtain consent before using such tools, and ensure the platform has passed a bias audit within the preceding year.
Connecticut is currently considering an algorithmic discrimination bill. The proposed law would require hiring organizations to conduct thorough assessments of AI hiring technology to identify potential biases, to inform candidates when the technology is used in evaluations, and possibly to offer candidates the option to opt out and undergo an alternative assessment method.
Additionally, governments in Massachusetts, Hawaii, Washington, Georgia, Oklahoma, Rhode Island, Vermont, and Washington D.C. have all recently introduced legislation aimed at regulating AI hiring tools (although details are sparse and the bills have a long way to go before becoming laws).
In addition to state governments, some city councils have begun exploring local laws that apply to AI-assisted hiring. The most notable example so far is New York City’s “Local Law 144,” which took effect in 2023.
New York City’s “Local Law 144” was one of the first in the U.S. to regulate the use of AI in employment decisions and became the model for many of the others we’ve previously highlighted. It imposes strict requirements on any employer or employment agency that uses an “automated employment decision tool” (commonly referred to as “AEDT”) to evaluate NYC-based candidates or employees. In order for hiring organizations to use an AI hiring solution, two conditions must be met:
The tool must have undergone and passed a bias audit conducted by an independent auditor within the preceding year. The audit is required to ensure the AEDT does not have a disparate impact on individuals based on race/ethnicity or gender.
Additionally, any organization using the tool must publish a summary of the most recent bias audit results (including the date of the audit) on their website before implementing it in their recruiting and workforce management processes. This requirement essentially forces organizations to measure and publicly disclose how the tool performs across demographic groups on an annual basis.
Employers must inform any candidate or employee residing in New York City that they’ll be assessed by an AI solution at least 10 business days before the evaluation takes place. The notice must also specify the job qualifications and skills that the individual will be evaluated on.
Furthermore, candidates and employees have the right to request an alternative evaluation process or other accommodation. The law also requires that candidates be told how they can request an explanation of the assessment or the data collected by the tool.
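To give a sense of what the required bias audit measures, the law’s implementing rules describe “impact ratios.” For scoring-based tools, that means the share of each demographic category scoring above the sample’s median score, divided by the same figure for the highest-rated category. The Python sketch below illustrates that arithmetic with made-up scores and category labels; it is not a substitute for an audit performed by an independent auditor on real historical data.

```python
# Simplified illustration of the "impact ratio" arithmetic described in the
# implementing rules for NYC Local Law 144 (scoring-based AEDTs). Scores and
# category labels are made up; a compliant bias audit must be performed by an
# independent auditor on real historical data.
from statistics import median

def scoring_rates(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Share of each category scoring above the overall median score."""
    all_scores = [s for scores in scores_by_group.values() for s in scores]
    cutoff = median(all_scores)
    return {group: sum(s > cutoff for s in scores) / len(scores)
            for group, scores in scores_by_group.items()}

def impact_ratios(scores_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Each category's scoring rate relative to the highest category's rate."""
    rates = scoring_rates(scores_by_group)
    top = max(rates.values())
    return {group: round(rate / top, 3) for group, rate in rates.items()}

# Hypothetical AEDT scores for two demographic categories
scores = {"category_a": [82, 91, 75, 88, 67],
          "category_b": [70, 64, 80, 59, 73]}
print(impact_ratios(scores))  # {'category_a': 1.0, 'category_b': 0.25}
```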
In response to NYC “Local Law 144,” Sense successfully completed a bias audit conducted by Holistic AI. Learn more about how we help our customers comply with the law.
As of now, no California city has a specific ordinance focused on AI in recruiting equivalent to New York City’s law. However, major cities like Los Angeles and San Francisco do have broad anti-discrimination and fair hiring laws that generally apply to AI-driven hiring, just as they do to traditional employment practices.
Looking ahead, employers should keep an eye on other large municipalities. Local governments—especially in tech hubs or big labor markets—may follow New York City’s lead and pass AI hiring regulations. In the meantime, the general principle is that local anti-discrimination and privacy laws apply to any decisions made or influenced by AI.
Understanding the laws surrounding AI in recruiting and hiring is critical, but even more important is ensuring your organization’s hiring practices are fair and equitable. Here are actionable strategies your organization can take to comply with anti-discrimination laws when using technology:
- Conduct regular bias audits of any AI hiring tools (ideally through an independent auditor) and document the results.
- Notify candidates whenever AI is used to evaluate them, and obtain consent where the law requires it.
- Offer alternative evaluation methods or reasonable accommodations to candidates who request them.
- Keep detailed records of how your AI tools are used and how employment decisions are made.
- Consult legal counsel to confirm which laws apply to your organization, including where your remote candidates are located.
The Sense AI-powered Talent Engagement Platform includes numerous features that empower hiring organizations to meet their recruiting goals—in a fair, equitable, and legally compliant manner. We would love to show you what Sense can do and answer any questions you have about compliance. Get in touch to learn more.