Australia’s AI plan criticized for emphasizing business over safety and rights

Electronic Frontiers Australia has criticized the federal government’s approach to regulating artificial intelligence, saying the next national AI plan prioritizes business opportunities over public safety and digital rights.

Legislative direction

Advocacy groups have expressed concern about the government’s decision not to enact dedicated, proactive AI legislation. Instead, Australia will rely on a combination of existing frameworks, including privacy, consumer protection, workplace and national security laws, to manage AI. According to the group, these laws primarily provide for ex-post, after-the-fact enforcement.

Electronic Frontiers Australia chairman John Pane compared the approach to AI regulation to previous privacy regulation efforts. “Many people don’t know that this Big Tech and corporate-friendly, light-touch regulatory approach was also used to implement the National Privacy Principles introduced into the federal Privacy Act in 2000. These ‘light-touch’ privacy principles have failed miserably due to poor design, regulatory co-optation, and fear-mongering that puts Big Tech and corporate profits and productivity above people and digital rights. And now it appears that history is repeating itself with this national AI plan,” Pane said.

Regulatory fragmentation

Under the government’s plan, AI-related obligations will be managed across existing legal frameworks, each of which will need to be updated to address new risks. EFA says this could lead to legal fragmentation, as frameworks that were never designed to work together are stretched to cover AI. Furthermore, the burden of enforcement is likely to fall on regulators such as the Office of the Australian Information Commissioner, which already face resource constraints.

The group highlighted the challenges of relying on a reactive enforcement model at a time when AI-powered harm can manifest quickly and at scale, sometimes instantly and without detection. “The new AI Safety Institute is starting to look like a lame duck, especially if it fails to prevent and mitigate high-risk and prohibited AI use cases that other countries have identified as important to human rights and then enshrined in law,” Pane said.

International models

Mr Pane argued Australia should take advantage of frameworks established overseas, particularly the European Union’s AI Act, which sets out strict ex-ante, or up-front, requirements. “Australia needs strong EU-style ex-ante AI legislation, not a repeat of Australia’s disastrous ‘light-touch’ private sector privacy regime introduced in 2000. We also need to resist the significant geopolitical pressure being placed on Australia by the Trump administration to force sovereign states to adopt US technology ‘or else’,” Mr Pane said.

Proposed protections

The EFA called for the introduction of mandatory risk assessments for high-stakes applications, a clear definition of prohibited AI use cases, and requirements focused on fairness, transparency and explainability. Pane said these should be backed by privacy, copyright and anti-disinformation protections.

“We need to take a stand and pass an AI bill that: introduces mandatory risk assessments for AI applications in high-stakes sectors such as healthcare, law enforcement and finance; defines high-risk and prohibited use cases such as vulnerability exploitation, emotion inference, biometric classification and subconscious manipulation; mandates fairness, transparency and explainability requirements for AI; and articulates clear accountability mechanisms for technology developers and adopters, supported by strong privacy and other legal protections to protect individuals, protect copyright, prevent algorithmic manipulation, and stem the flow of misinformation and disinformation.”

Concerns about public trust

EFA notes that the lack of proactive AI legislation has contributed to a decline in public trust in such technologies. “The absence of an up-front citizen- and society-centered legal framework for the development and deployment of AI not only puts individual rights at risk, but also further undermines the already very low level of public trust in AI technologies,” Pane said.

“EFA reiterates our call to the Australian government to create a human rights-based AI regulatory framework modeled on the European Union AI Act, one that prioritizes privacy, safety and people’s rights over rubbery short-term economic profits, much of which will flow overseas,” Mr Pane said.
