New York has new rules on AI hiring tools. Here’s how the changes might help or hinder your job prospects.
April 13, 2023, 4:08 p.m.
Employee advocates say that the city’s law is too weak, and employers say that confusion abounds.

After months of delay and debate, New York City has ironed out details for the nation’s first-of-its-kind law regulating hiring tools powered by artificial intelligence, from resume screeners to AI-powered video interviews. What the city has arrived at could affect how, and whether, you get hired for your next job.
Local Law 144, passed in 2021, requires employers in the city to notify NYC-based job candidates in advance that they use “automated” hiring tools and to conduct a “bias audit” on each tool every year, measuring any disparate impact on people based on their race, ethnicity or sex.
The law was supposed to go into effect at the beginning of the year, but the city’s Department of Consumer and Worker Protection has repeatedly delayed enforcement as it determined which AI tools should qualify and what the bias audit should entail. The agency released its final rules last week, and enforcement is now set to begin on July 5. The delays, however, have hardly eliminated uncertainty surrounding the new law. The final rules have stirred criticism and confusion among employers and employee advocates alike.
“I think it's a very stressful time inside of modern HR departments,” said Patrick Hall, principal scientist at the boutique law firm BNH.AI. “There's a lot of questions about whether they should turn off the (AI) tool or even use the automated decisioning tool.”
Worker advocates and tech watchdogs argue that the law has been so watered down since its passage that it is a far cry from the original. They claim it gives employers far too many loopholes to evade scrutiny; one of their biggest concerns is that the law covers only tools that play a major part in the hiring process. Meanwhile, employers, their attorneys and business organizations say that companies are still confused about which tools qualify and are stretched thin by the new requirements.
Labor attorneys and tech experts predict that the ambiguities in the law will be ironed out in future court cases. And because New York’s regulation is novel, they say, it may set a precedent for other major cities – a troubling possibility to critics across the ideological spectrum.
“Because of all of the weaknesses, both of the law itself and the rules that have been proposed for it, it shouldn't be viewed as a model,” said Matt Scherer, senior policy counsel for workers' rights and technology at the Center for Democracy and Technology, echoing a sentiment shared by some employer-side advocates. “It's unfortunate that it was the first to market, as it were, for a law governing automated tools.”
Here’s what you need to know right now:
What exactly is an 'automated' hiring tool?
Under the law, that’s any hiring tool that uses machine learning or AI – as well as certain other forms of data analytics – to screen job candidates.
That could include programs that sift through resumes to choose top candidates, or a face-scanning algorithm from the recruiting technology company HireVue designed to measure personality traits during video interviews.
What do employers who rely on such tools now have to do?
Employers must notify New York City-based job candidates and employees who are up for promotions at least 10 business days before they'll be screened with automated hiring tools that are subject to this law — and must offer a way for them to opt for an alternative selection process. The notice required under the law could be in an email, a job posting, or “in a clear and conspicuous manner” on the employment section of a company’s website.
An outside auditor must also conduct a bias audit every year to evaluate whether the tool — even if it appears impartial — favors one racial, ethnic, or sex category over another, and a summary of the audit must appear on the company’s website.
Employer websites must also include information about the kind of data they’re collecting, the source it’s coming from, and how long they will retain the information – or inform applicants how they can request that information, which must then be provided within 30 days.
“This is a pretty extraordinary level of transparency that they're forcing,” said Shea Brown, CEO of BABL AI, an AI research consultancy that performs audits. “People literally have to test for a very sensitive thing, and they have to make that public.”
What counts as a biased tool?
The law doesn’t define bias or require auditors to determine whether a tool is biased, but the required analysis is similar to one that federal enforcement agencies use to determine if a hiring test is discriminatory. The key number is the “impact ratio,” which compares how a racial, ethnic or sex group fares under the tool with how the highest-scoring group fares. Under the federal regulations, if an impact ratio is less than four-fifths, or 0.8, the hiring process is considered to have an adverse impact.
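To get a rough feel for the arithmetic, here is a minimal sketch in Python. The group names, applicant pools and selection counts are all made up for illustration; this is not the calculation prescribed by the NYC rules or the federal guidance, which define the required methodology in far more detail.

    # Illustrative only: hypothetical applicant pools and selection counts.
    applicants = {"group_a": 200, "group_b": 150, "group_c": 100}
    selected = {"group_a": 60, "group_b": 30, "group_c": 15}

    # Selection rate for each group = number selected / number of applicants.
    rates = {g: selected[g] / applicants[g] for g in applicants}

    # Impact ratio = a group's selection rate divided by the highest group's rate.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "possible adverse impact" if ratio < 0.8 else "within four-fifths"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

With these invented numbers, the groups are selected at rates of 0.30, 0.20 and 0.15, giving impact ratios of 1.00, 0.67 and 0.50 – so the second and third groups fall below the 0.8 threshold.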
What are potential 'loopholes' in the law?
Tech watchdogs' biggest concern is that the law requires the tools to be a major factor in the hiring process: Employers must rely solely or primarily on the tool, or use it to overrule human choice. To avoid scrutiny, employers could argue that scores or outputs from automated tools are just one of several considerations, and real people ultimately make the final decision.
Even when the answer is blurry, some tech advocates worry, employers may argue the rules don’t apply to them simply to avoid the extra hassle. There’s also a legal disincentive: data from the audits could become the subject of discrimination cases, multiple tech advocates and attorneys said. And beyond being publicized, the results could be discoverable in lawsuits, Brown said.
“Behind the scenes, there are a lot of people who are making plans essentially to justify why this doesn't apply to them,” Brown said. He later added: “If nothing else, even if they may think that it might be the right thing to do, they'll be strongly encouraged by their lawyers to not do it.”
What are companies doing to gear up?
Employers are scrambling to figure out whether their tools are subject to the law, gathering information from the companies that make their hiring tools, and getting their hiring data in order so they can complete audits before enforcement begins, Hall said. Company leaders who think their tools are covered are hiring from the growing cottage industry of AI bias auditors – several of whom say they’ve seen a surge in demand for their services in recent months. But confusion still abounds about which tools are subject to the law, despite several rounds of rulemaking and delays in enforcement.
“The definition is not concrete enough for me to say it applies to X but not Y,” said Robert Szyba, a labor lawyer at Seyfarth Shaw LLP. “The grayer the rules, the less definite the requirements are, the harder it is to figure out what employers need to do.”
He later added: “You're going to find companies that have difficulties figuring out whether or not they have to comply and in good faith may make a mistake.”
Some business executives may choose to ditch such tools altogether. A few large employers have considered basing employees at out-of-state headquarters and hiring there instead, said Frank Kerbein, director of the Center for Human Resources at the state Business Council.
What concerns do business advocates have?
Besides fears of breaking the potentially squishy rules, human resources professionals and employer advocates worry that the regulations will be overly burdensome. They point to difficulties other companies have reported in finding auditors, privacy concerns around sharing data with auditors, and the tens of thousands of dollars they may need to spend to comply. Under the rules, auditors can’t be employed by the employer or by the company selling the hiring technology.
“It’s just one more thing that adds to the perception of New York City, New York state being not business friendly,” Kerbein said, adding that a federal solution would be more appropriate.
Emily Dickens, the head of public affairs at the Society for Human Resource Management, said she worries that for small and midsize companies in particular, the added costs will disincentivize the use of advanced technology that can speed up the hiring process and, she argues, potentially even limit bias.
“We’ll have HR falling behind in innovation,” Dickens said.
What are the consequences for not following the law?
Employers not following the rules could be on the hook for a $500 penalty for the first violation and $500 to $1,500 for each subsequent one. Each day a tool is used in violation of the law, and each failure to provide the required notice to candidates, counts as a separate violation. The Department of Consumer and Worker Protection, which regularly fields workplace complaints, is tasked with enforcing the law.
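Because violations accrue daily, even these modest per-violation amounts can add up quickly. Here is a hypothetical back-of-the-envelope sketch; the 30-day span and the assumption of the maximum $1,500 for each subsequent violation are invented for illustration, not drawn from the rules.

    # Hypothetical: a covered tool used without the required notice for 30 days.
    days_in_violation = 30
    first_penalty = 500         # first violation
    subsequent_penalty = 1500   # assuming the $1,500 maximum for each later one

    total = first_penalty + (days_in_violation - 1) * subsequent_penalty
    print(f"Potential exposure over {days_in_violation} days: ${total:,}")
    # Potential exposure over 30 days: $44,000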
Are there other concerns about employers evading scrutiny?
Other concerns raised by tech watchdogs center on what automated processes are – and, notably, aren’t – subject to the requirements. The law applies to tools used in “employment decisions,” which DCWP officials have interpreted to mean selecting or advancing candidates in the hiring or promotion process. That leaves out other important processes like targeted ads and recruiting messages, Scherer said.
Some critics also point to a lack of rigor in the audit process. The audits don’t need to consider other federally protected classes that are ripe for bias, like disability, some advocates argue. Nor does an audit have to test whether the tools actually do what they’re designed to do – that is, accurately choose candidates who will do well on the job. That may lead to “fairwashing”: giving a rubber stamp of approval to “nonsensical” hiring programs that are little better than a random choice, said Julia Stoyanovich, an NYU computer science and engineering professor who has studied how the same hiring program can churn out vastly different results for the same people.
“We’re putting this smiley face, this check mark, on tools that are otherwise broken,” she said.
One thing both employers and employee advocates agree on: The answers to any remaining questions not addressed in the rules will likely be ironed out in court.