Australia Asks Whether ‘High-Risk’ AI Should Be Banned in Surprise Consultation

AI News


The Australian government has announced a surprise eight-week consultation to consider whether “high-risk” artificial intelligence tools should be banned.

Other regions, including the United States, the European Union and China, have also taken steps in recent months to understand and potentially mitigate the risks associated with rapid AI development.

On 1 June, Minister for Industry and Science Ed Husic announced the release of two papers: a discussion paper on ‘Safe and responsible AI in Australia’ and a report on generative AI by the National Science and Technology Council (NSTC).

The papers were released alongside a consultation that will run until 26 July.

The government is seeking feedback on how to support the “safe and responsible use of AI,” and asks whether it should take a voluntary approach, such as ethical frameworks, introduce specific regulation, or pursue a mix of both.

A map of potential AI governance options ranging from voluntary to regulatory. Source: DISR

Notably, the consultation asks directly: “Should any high-risk AI applications or technologies be banned completely?” and what criteria should be used to identify the AI tools that should be banned.

A draft risk matrix for AI models is included in the discussion paper for feedback. To give just two examples, it classifies AI in self-driving cars as “high risk,” while generative AI tools used for purposes such as creating medical patient records are considered “medium risk.”

The paper highlights both “positive” uses of AI in the medical, engineering and legal industries, and “harmful” uses such as deepfake tools, the creation of fake news, and cases of AI bots encouraging self-harm.

Bias in AI models and “hallucinations,” where an AI generates nonsensical or false information, were also raised as issues.

Related: Microsoft CSO Says AI Will Help Humanity Prosper, Co-Signs Doomsday Letter Anyway

The discussion paper argues that AI adoption is “relatively low” in Australia because of “low levels of public trust.” It also points to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.

Meanwhile, the NSTC report said Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in [large language models] and related areas is relatively weak.”

“The concentration of generative AI resources within a small number of large multinational, primarily US-based technology companies poses potential [sic] risks to Australia,” it added.

The report further discusses global AI regulation and gives examples of generative AI models, saying they “will likely impact everything from banking and finance to public services, education and creative industries.”

AI Eye: 25,000 Traders Bet on ChatGPT’s Stock Picks, AI Is Bad at Throwing Dice, and More



