The warnings came from several Australian AI companies at the Senate's inquiry into the adoption of artificial intelligence on Wednesday, even as they backed reform of privacy laws and limits on high-risk uses of AI.
Their evidence came a day after representatives of the publishing, news, television and film industries told the committee that US companies had used Australian content to build AI models without consent or compensation.
Australian companies say tough rules would leave them unfairly disadvantaged against global competitors. (AP photo)
Trellis Data CEO Michael Gately said the government would face significant challenges if it tried to regulate the use and development of AI models in Australia.
While guidelines were needed, he said, strict rules on paying content creators and on transparency about training data could deter Australian companies while being ignored by overseas developers.
“Personally, I have always wanted people to be compensated for their work under copyright law.
“However, I think this would be difficult to implement and would likely have an unfair impact on Australian businesses competing internationally,” Mr Gately said.
“The world's large language models have already made massive inroads into the work of Australian artists.
“To a large extent, the horse has already bolted.”
Hearst chief technology officer Liam King said onerous rules on data transparency “could put Australian organisations at a disadvantage on the global stage”.
Nuvent CEO David Hohnke agreed, telling the committee that any AI regulation introduced in Australia should align with European and US rules to avoid stifling local innovation.
“We don't want to be held back by a requirement for Australians to vet all the data when the rest of the world doesn't have to,” he said.
“If we go it alone we could hurt ourselves, and people will say, 'So what, I'll just upload documents to ChatGPT and breach the company's requirements anyway.'”
But Atlassian's head of global public policy, David Masters, said that while strict AI regulation could slow innovation in Australia, there was still room to set standards for its use and make legal changes.
Mr Masters said tech companies wanted Australian standards defining high-risk uses of AI, and for privacy laws to be amended to align with European rules.
“We have been very clear that we want to see privacy reform progress in Australia and standards raised,” he said.
“We don't want to stifle innovation in relatively low-risk scenarios that present opportunities for Australia, but it's important that we have the right guardrails and appropriate levels of security in higher-risk areas.”
Anna Jaffe, Atlassian's director of regulatory affairs and ethics, also told the committee that standards for AI labelling should go beyond watermarking content and explain how the technology had been used.
“You can require disclosure that AI was used in the design process, but without explaining what that means, transparency in itself doesn't mean much,” she said.
The Senate committee is due to release its findings on the impact of AI in September.