Australia has outlined plans for new AI regulations, focusing on human oversight and transparency as the technology spreads rapidly across business and everyday life.
The country’s Industry and Science Minister, Ed Husic, introduced ten voluntary AI guidelines on Thursday and launched a month-long consultation to assess whether these measures should be made mandatory in high-risk settings.
The guidelines include provisions to “enable human control or intervention” within AI systems to ensure meaningful oversight and to inform end-users “regarding AI-enabled decisions,” interactions with AI, and AI-generated content.
In a statement, the government said that consultations with the public and industry last year revealed strong support for tighter AI regulation. Businesses also called for clearer guidelines to confidently capitalize on the opportunities AI offers.
“The Tech Council estimates Generative AI alone could contribute $45 billion to $115 billion per year to the Australian economy by 2030,” the statement said. “That’s why earlier this year, the government appointed an AI expert group to guide our next steps.”
Acting on AI concerns
Global regulators have voiced concerns over the spread of misinformation and fake news driven by the rise of generative AI tools like Microsoft-backed OpenAI’s ChatGPT and Google’s Gemini.
In response to such concerns, the European Union introduced landmark AI legislation earlier this year, imposing strict transparency requirements on high-risk AI systems, exceeding the voluntary compliance measures adopted by some countries.
“The EU’s comprehensive AI framework, with its stringent transparency obligations, has set a global standard for AI regulation,” said Prabhu Ram, VP of industry research at Cybermedia Research. “Australia’s proposed AI guidelines, while voluntary, emphasize the importance of transparency and human oversight throughout the AI system lifecycle.”
However, compliance with these guidelines could pose a significant learning curve for enterprises due to the complexity of AI systems and the challenge of integrating them into existing business processes, Ram added.
Challenges for enterprises
While AI regulations are widely acknowledged as necessary, their implementation can be challenging, pointed out Faisal Kawoosa, chief analyst at Techarc.
For instance, human intervention, though crucial, cannot match the speed or efficiency of technology, creating two major hurdles.
“First, the time required for human review can be substantial,” Kawoosa said. “Companies promote AI as a tool for fast problem-solving and greater efficiency, but manual oversight can create friction, slowing processes down. Second, even with human involvement, it’s nearly impossible to review everything comprehensively.”
This could result in slower decision-making, particularly in sectors like finance and healthcare, where quick, accurate responses are essential. Scaling AI systems across large operations may become more challenging as human intervention introduces inefficiencies and increases costs.
Moreover, the risk of human error and bias complicates AI-driven workflows, while evolving regulatory demands add further complexity. These challenges could impede the seamless, widespread adoption of AI tools in high-risk enterprise settings.

In its statement, the government said that, in line with actions taken in other jurisdictions such as the EU, Japan, Singapore, and the US, the guidelines will be updated over time to reflect changes in best practices.