Journal

CEO Memo December

Governance Institute of Australia CEO, Megan Motto FGIA FCG

Ethics and ethical conduct are the bedrock of any functioning society. They ensure that decisions are made in the best interest of all stakeholders and foster transparency, trust, and accountability. In Australia, the Governance Institute’s 2024 Ethics Index offers a snapshot of how ethical standards are perceived across various sectors.

This year, the Ethics Index revealed that Australians’ expectations around ethics have never been higher, rising from 74 in 2016 to 85 in 2023. While the desire for ethical behaviour has reached an all-time high, the gap between expectations and perceived performance remains troublingly wide.

We also produced a supplementary Ethics Index report on AI, sponsored by the National AI Centre, to deliver further, more qualitative insights into society’s perceptions of artificial intelligence.

As AI continues to penetrate every facet of society, the urgency for ethical AI governance has never been more pressing. In this year’s Ethics Index, 56 per cent of Australians surveyed felt there is an ‘urgent ethical obligation’ for businesses to disclose the use of generative AI (Gen AI) in creating content. This is no small matter.

Transparency is critical for maintaining public trust, especially as AI systems increasingly play a role in shaping opinions, making predictions, and even influencing decisions that affect people’s lives. But what does this mean for businesses and regulators? How do we ensure AI is used ethically without stifling innovation or overwhelming industry players with ambiguity?

One place to start is by applying the new Voluntary AI Safety Standard to your organisation’s governance.

While nearly 40 per cent of respondents said they were uncertain about AI – with concerns about misuse, bias, and job displacement fuelling this apprehension – only 22 per cent use AI regularly or daily. Notably, sentiment becomes more favourable with greater use. This presents a challenge for organisations keen to explore AI’s opportunities within the boundaries of the law.

The current landscape of AI governance is fraught with complexities, not least the lack of clarity around how existing laws, such as the Australian Consumer Law (ACL), apply to AI. We welcome the Treasury’s consultation on this and have made recommendations in our submission. The ACL is designed to protect consumers from misleading or deceptive conduct, which this year’s Ethics Index found to be the second highest ethical issue for Australians after corruption. The intricacies of AI-enabled goods and services — where the line between products and services blurs — pose significant challenges in defining what constitutes fair practices.

For example, how can businesses articulate the decision-making process behind an AI model which is, by nature, opaque and complex? How should they disclose the role AI plays in generating recommendations or content, especially when these models are constantly evolving?

These issues raise important questions about how we regulate AI to ensure businesses are meeting their obligations and consumers are aware of their rights. Misleading claims about AI capabilities, or “AI-washing,” are becoming a growing concern. Businesses that hastily adopt off-the-shelf AI systems without fully understanding their limitations may be tempted to overstate their potential, which risks undermining public trust in the technology. This points to a clear need for regulatory clarity and guidelines that help businesses navigate the challenges of disclosing AI involvement without violating consumer protections.

The most glaring gap in AI governance, however, is a lack of understanding. With small businesses making up 98 per cent of all businesses in Australia, the complexity of AI systems means that many businesses are hesitant to adopt AI technologies, and consumers are often wary of engaging with them.

A growing number of respondents (68 per cent) believe that tech companies have an “urgent ethical obligation” to ensure that AI is not used to deceive or manipulate. This concern is amplified by the increasing reliance on AI for tasks such as content creation, decision-making, and customer interaction. For instance, many Australians feel that the use of generative AI should be explicitly disclosed, especially when AI-generated content could potentially mislead or deceive consumers. Addressing this concern requires businesses to adopt ethical guidelines that emphasise transparency and accountability.

At its core, the conversation around AI ethics is about trust. If businesses and governments fail to provide clear, consistent guidelines for the ethical use of AI, the technology will remain shrouded in uncertainty. Only by addressing the complexities of AI in a thoughtful and transparent way can we ensure that it serves the public good without compromising ethical standards. By aligning those standards with public expectations, AI, when properly regulated and thoughtfully implemented, can become a force for good. The future of AI is not just about what the technology can do, but how we as a society choose to govern it.

Thank you to all of our members for your support and engagement throughout 2024. The Governance Institute team wishes you the very best for a safe and happy holiday season, and we look forward to seeing you again in 2025.

Acting for You, December 2024
