Human resources experts say there's a new frontier for diversity: human quotas

Diversity and inclusion quotas based on gender and race were quietly repealed in 2025. But as AI continues to reshape work, new types of quotas may emerge, HR experts say.

What is the category these new quotas would aim to protect? Humanity.

At the HR Symposium in London on Tuesday, analysts from research giant Gartner predicted that by 2032, at least 30% of the world's largest economies will require "certified staffing that mandates a minimum level of human involvement on the job."

Ania Krasniewska, group vice president at Gartner, told Business Insider in a conference interview that the global research firm's forecast is based on several Gartner analyses.

The goal will be to ensure that humans continue to play a meaningful role in production, decision-making, and creative processes while AI plays a larger role in enterprise workflows and outcomes.

“This kind of change will be legislatively driven, not institutionally driven,” Krasniewska said. “Once these rules come into effect, organizations will need a clear process for redeploying employees and a way to demonstrate that they are doing so.”

In August, the High Court of Australia ruled that the Fair Work Commission can investigate whether an employer could have restructured operations and redeployed workers before declaring them redundant.

The decision, which stems from cases in which employees were laid off while their tasks were outsourced to contractors, will also shape how companies approach redeploying the human workforce alongside AI.

“We expect to see more such legislation announced in the coming years,” Krasniewska said.

As AI takes on more work, business leaders must also consider how to handle liability for errors that result from it.

Using the example of medical imaging, Krasniewska said, "If an AI reads a scan and a treatment plan follows, and it turns out that the reading was wrong, who will take responsibility for that?"

A common solution is to "keep humans in the loop." The EU AI Act, for example, requires "meaningful" human oversight of high-risk AI systems, ensuring that humans can intervene to correct AI outputs that endanger health, safety, or fundamental rights.

It's not a surefire solution.

Deloitte, one of the world's largest consulting and accounting firms, agreed this week to make a partial refund to the Australian government after it produced a report that contained errors, including references to non-existent academics and fabricated citations from federal court decisions. The Big Four firm said it had used AI to prepare the report.

Krasniewska said companies need ways to track where humans were involved in producing work, and to use citations and watermarks so it is clear when information includes an AI component.

Legislation and public pressure are likely to result in some form of disclosure requirement, she added.

“Organizations don't necessarily think that far ahead about what we should do, but I think there's a very human reality of feeling the need to disclose how we got from point A to point B,” Krasniewska said.
