Governance Institute advocates for a risk-based approach to AI regulation in response to Federal Government discussion paper
You don’t have to look far to see how AI is having an impact on our daily lives. You have probably already used a generative AI application such as ChatGPT, or AI-powered functions within Microsoft Word. But its use is undoubtedly raising more questions than answers about ethics, trust and confidence in using AI for business purposes.
The Governance Institute believes it’s fundamental that Australia takes a serious look at AI and the risks and benefits associated with it. The Federal Government released the Safe and Responsible Artificial Intelligence (AI) in Australia discussion paper in June, seeking systemwide feedback on steps Australia can take to mitigate the potential risks of AI.
Governance Institute’s CEO, Megan Motto FGIA, has welcomed the discussion paper as an important step to understanding current regulatory initiatives underway and what work needs to be done.
‘Australia is at an important crossroads,’ Ms Motto said.
‘We have an opportunity to build a global competitive advantage by increasing investment in AI and leading the move towards the responsible creation and use of AI, or we can pursue a light-touch system that forgoes the benefits safe and responsible AI would produce.’
‘We welcome the release of this paper and support the aims of the Government to mitigate any risks of AI. We want, however, to ensure regulation in this space doesn’t curb the productivity opportunities that AI can bring,’ Ms Motto said.
Governance Institute has a strong interest and involvement in digital technology and cyber security policy. In recent years our Risk and Technology policy committee has published Good Governance Guides on cloud services, digital transformation, digital trust, technology strategy, technology governance, cybersecurity, data as an asset, and ethical use of AI.
In our submission to the Department of Industry, Science and Resources, Governance Institute responded to a number of questions set out in the discussion paper that are of interest and concern to our members.
One of the recommendations outlined in our submission was the establishment of a dedicated and independent AI Safety Commissioner and Agency, as outlined in the Australian Human Rights Commission’s Human Rights and Technology Report.
Such a role would support regulators, policymakers, governments, and businesses in applying laws and other standards for AI-informed decision-making. Likewise, the role would provide a centralised regulatory body responsible for developing and enforcing AI policy and legislation.
Another recommendation advocated in our submission was for a risk-based approach to AI regulation as the best way to ensure safe usage and community trust in this technology.
Governance Institute members say they prefer a ‘middle ground’ approach: one that is neither wholly technology-neutral nor prescriptive, technology-specific regulation.
‘We’ve seen a similar approach adopted in the EU via their AI Act. This approach allows the Government to limit potential risks or harms associated with AI, as well as closing regulatory gaps that AI can exploit,’ Ms Motto said.
Fundamental to such an approach is that the Government helps build public trust in AI, as the technology will continue to embed itself in most aspects of life.
To read our full set of recommendations and concerns, download our submission titled Safe and Responsible Artificial Intelligence (AI) in Australia.