Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
Workers challenge ‘hidden’ AI hiring tools in class action with major regulatory stakes.

Workers are getting fed up with AI-based hiring practices.

A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that “lurk in the background” to collect “sensitive and often inaccurate” information about “unsuspecting” job applicants.

The suit specifically targets Eightfold AI, claiming that the company's tools should be regulated in the same way credit reporting bureaus are under the Fair Credit Reporting Act (FCRA) and the state laws modeled on it.

The case could have far-reaching implications for the increasing use of AI in hiring.

“This lawsuit is a pivot point,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”

Violating the 55-year-old FCRA

The suit was filed in the Superior Court of California by New York City-based law firm Outten & Golden LLP, on behalf of Erin Kistler and Sruti Bhaumik. The plaintiffs claim they were barred from employment on several occasions by companies using AI-based hiring tools.

The class action complaint asserts that Eightfold AI violated federal and state fair credit and consumer reporting acts and unfair competition laws by collecting data on applicants and selling reports to companies for use in employment decision-making. These practices “can have profound consequences” for job-seekers across the US, the lawsuit claims.

Eightfold markets itself as the “world’s largest, self-refreshing source of talent data” and incorporates more than 1.5 billion global data points, including job titles and worker profiles across “every job, profession, [and] industry.” It counts among its customers corporate giants including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer.

The suit claims the Santa Clara-based company’s proprietary large language model (LLM) and deep learning-based technology analyze data from public resources including career sites, job boards, and resumé databases such as LinkedIn and Crunchbase. It also culls information from social media profiles, applicant locations, and behind-the-scenes tracking tools. None of these personal data points appear in the job applications themselves.

AI algorithms then rank a candidate’s “suitability” on a numerical scale of 0 to 5, based on “conclusions, inferences, and assumptions” about their culture fit, projected future career trajectory, and other factors. This method is intended to create a profile of the candidate’s “behavior, attitudes, intelligence, aptitudes, and other characteristics,” according to the lawsuit.

However, these reports are “unreviewable” and “largely invisible” to candidates, who have no opportunity to dispute their contents before they are passed on to hiring managers, the plaintiffs argue. “Lower-ranked candidates are often discarded before a human being ever looks at their application.”

This method of report creation violates longstanding FCRA requirements, and there is no stipulated exemption for AI use, according to the suit.

The FCRA broadly defines consumer reports as any written, oral, or other communication from a consumer reporting agency that includes information on a person to determine their access to credit and insurance, as well as for “employment purposes.” According to the lawsuit, this definition covers reports that contain information on “habits, morals, and life experiences.”

Plaintiffs argue that, while automated screening technology did not exist when the FCRA was established in 1970, lawmakers at the time expressed concern about growing accessibility to consumer information through computer and data-transmission techniques, and that “impersonal blips,” inaccurate data, and analysis by “stolid and unthinking machines” could unfairly bar people from employment.

Thus, the lawsuit argues, agencies like Eightfold must disclose their practices, obtain certifications, and give consumers a mechanism to review and correct reports. “Large-scale decision-making based on opaque information is exactly the kind of harm the statute was designed to address.”

Lawyers for neither the plaintiffs nor the defendants responded to requests for comment. The Society for Human Resource Management (SHRM) also declined to comment.

Defensibility becomes the new bar

This lawsuit exposes a “governance failure” and “fundamental accountability gap,” noted Greyhound’s Gogia.

And it’s not the first, nor will it likely be the last; HR company Workday, for instance, is facing a lawsuit alleging that its AI-powered hiring tools make decisions based on race, and also discriminate against older and disabled applicants.

If courts agree that AI evaluations function like credit reports, hiring will be pushed into regulated territory, Gogia noted; this means CIOs must establish clarity and set rules around notification, transparency, audit rights, and contestability.

“If your hiring tools operate like decision engines, they need to be governed like decision infrastructure,” he said. And when they influence employment decisions, enterprises will have to prove they’ve done their homework: showing the logic behind a model, understanding data provenance, being able to explain why an applicant was rejected, and demonstrating the processes in place to correct bad calls.

“Defensibility will become the new bar,” said Gogia.

Where AI hiring helps, where it hurts

That’s not to say that AI can’t be valuable in hiring; many real-world examples have proven that it can. The Human Resources Professionals Association, for one, points to successful use of AI in initial talent sourcing, screening, and assessment, while AI scribes can quietly take notes, helping recruiters focus more intently on candidate discussions.

Gogia agreed that AI can filter and rank large applicant pools, automate repetitive HR tasks, and identify overlooked candidates within internal databases. This means hiring teams can move faster, hone their focus, be more consistent, and reduce friction.

“But the moment AI moves into judgment territory, things get messy,” he emphasized. Scoring personality traits, predicting future roles, or evaluating the quality of a candidate’s education are all “subjective inferences dressed up as mathematical objectivity.”

Gogia advises clients to insist on human-readable evidence from vendors, including logs, bias audits, and disclosures about model updates. They should ask questions like: What did the model evaluate? Why did it rank one candidate higher than another? What can the hiring manager say if asked to justify that outcome?

The answers to those questions can lead to process changes. One of Greyhound’s European manufacturing clients, for instance, redesigned its hiring pipeline so that managers had to log a rationale at every decision point, even if AI had already created a shortlist. This improved the audit trail, caught errors, and taught the team to “treat AI as input, not verdict,” Gogia noted. Another client slowed its final screening process for senior hires because it couldn’t defend the decisions AI was influencing and realized the system wouldn’t survive scrutiny.

“CIOs, CHROs, legal, risk — all need to co-own this now,” said Gogia. “That starts by restoring the human’s role as an accountable actor, not just a passive observer. The future of hiring tech is human with machine, governed from day one.”

This article originally appeared on Computerworld.


Category: News | January 23, 2026
