This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in the Mandarin on August 18th, 2025.
One question looms as Australia prepares for its productivity roundtable: how do we ensure that the technologies driving future productivity support the values that define us?
Artificial intelligence (AI) is often touted as a solution to Australia's declining productivity. However, without the right guardrails, we risk solving old problems while creating new ones. This roundtable is an opportunity to go beyond theory and commit to practical reforms that drive sustainable and inclusive growth.
AI is already shaping recruitment, credit scoring, policing and welfare delivery – areas that directly affect our lives. If these systems are biased or poorly designed, the consequences can be severe: discrimination, exclusion and the erosion of public trust.
Economic growth must be pursued with a clear-eyed understanding of the risks of AI, especially in settings where human rights are most vulnerable.
Current proposals for mandatory guardrails on high-risk AI are proportionate and targeted. They support innovation by ensuring that appropriate safeguards are in place where the potential for harm is greatest.
The Australian Human Rights Commission has consistently called for a legislative framework that sets enforceable standards for AI. It supports an Australian AI Act that establishes essential guardrails for high-risk applications, prohibits uses that pose unacceptable risks, and requires human oversight.
Such a framework would provide clarity and consistency for businesses and regulators, enabling responsible innovation that aligns with Australia's values. Without proper protections, we risk undermining the very productivity gains we seek.
For businesses, this is not just about compliance – it is a strategic imperative. Consumers, investors and employees increasingly expect ethical and transparent practices. Trust is a competitive advantage.
Some well-known cases show what happens when an AI-powered product is launched without proper safeguards. Microsoft's Tay chatbot was taken offline within 24 hours after it began generating offensive content. Google's photo-tagging algorithm mislabelled people of colour in a deeply offensive way. Amazon's recruitment tool was scrapped after it was found to discriminate against female candidates.
These failures to consider human rights led to public backlash and loss of trust, damaged reputations and diverted resources away from innovation.
In contrast, companies that lead with integrity and foresight can shape a future of responsible innovation. Human rights-centred regulation creates the stable environment needed to build trust, reduce risk and invest with confidence. While voluntary action by industry is important, it cannot replace enforceable rules that ensure consistency, accountability and protection – especially when things go wrong.
New laws should complement existing ones, but the need for regulatory gap analysis should not become an excuse for inaction. The slow pace of reform in areas such as privacy law is sobering. By the time consensus is reached, the technology has already moved on. Rather than waiting for a "perfect" solution, we need to act now.
Some suggest the US is taking a "let it rip" approach to AI, but the reality is more nuanced. Federal action is limited, but many US states are stepping up.
Colorado requires businesses to assess and mitigate the risks of high-risk AI systems. In June, Texas passed a comprehensive law requiring transparency and consumer protections, including prohibitions on manipulative uses of the technology. The removal of a proposed moratorium on state-level AI regulation from Trump's "big beautiful bill" reflects a bipartisan recognition that AI should not remain unregulated.
An Australian AI Act would be a practical and targeted response to the growing understanding that high-risk AI requires specific safeguards.
As the productivity roundtable considers how to unlock Australia's next wave of economic growth, the real question is what kind of growth we want: fast but fragile, or resilient, inclusive and built on trust?
Embedding human rights in AI governance is not a constraint – it is a prerequisite for long-term success. An AI Act would ensure that technology serves not only private interests but the public interest, and that all Australians benefit from innovation without sacrificing their rights.
Let us build a digital Australia that is innovative, inclusive and rights-focused – because the future of technology is not just about what we can do, but about what we should do.
Lorraine Finlay is Australia's Human Rights Commissioner
