TL;DR: Job seekers are suing the AI recruiting company Eightfold, alleging it compiles secret reports that help employers select candidates. Why would that be illegal? The suit argues it’s for the same reason credit reporting agencies have to tell you why they changed your score. If courts accept this logic, it could begin to reshape the black-box world of AI recruitment.

What happened: Like many people playing the job-search numbers game lately, the plaintiffs were fed up with applications that seemed to disappear into the void. They filed a class action lawsuit against Eightfold, whose software is used by major companies like Microsoft and PayPal to vet potential hires. The suit claims Eightfold violated the Fair Credit Reporting Act and a similar California consumer protection law by not allowing applicants to review the information collected about them and correct the record if necessary.

“Eightfold’s technology lurks in the background of job applications,” the lawsuit claims, “collecting personal data, such as social media profiles, location data, internet and device activity, cookies, and other tracking.”

Eightfold disputes this: the tool “operates on data intentionally shared by candidates or provided by our customers. We do not scrape social networks and the like,” spokesperson Kurt Foeller told us. “Eightfold believes the allegations are without merit.”

What is not in dispute is that Eightfold uses AI to produce a score between zero and five, ranking a candidate’s suitability for a given role.

Why it’s important: Companies now use a range of AI tools behind the scenes to source and evaluate candidates. Candidates play the game too, using their own AI tools to find openings and generate applications. It’s AI all the way down.

“We are at a point where AI-based recruiting tools are being adopted very quickly, often faster than companies put in place the compliance, auditing and governance structures necessary to use them responsibly,” the attorneys handling the case, Jenny R. Yang and Christopher M. McNerney, partners at Outten & Golden LLP, told us in an email. “This creates a real risk, not only of inaccurate decisions, but also of hidden discrimination.”

Some jurisdictions, including New York City, have laws governing these tools, largely focused on their potential for bias and discrimination. But AI decision-making still happens mostly without the knowledge of job seekers.

This isn’t the first time the Fair Credit Reporting Act has been used to challenge big-data recruiting systems, according to Pauline Kim, an employment law professor at Washington University School of Law, but the focus on AI is new.

What this means for you: If the lawsuit succeeds (which could take years), AI recruiting tools could become more upfront about the data they collect and work harder to ensure its accuracy, Kim said. But the 55-year-old law the suit relies on may not fully account for how these tools are used today.

The bigger takeaway, according to Kim, is that companies relying on these tools should be more transparent about their use. “Since the law was written in an earlier era, even if the courts enforce it, it will provide only limited transparency – probably not enough to ensure the fairness of these systems.” -PK

This report was originally published by Tech Brew.


