
Navigating the AI landscape

By: Daniel Popovski, Senior Policy & Advocacy Advisor, Governance Institute of Australia Ltd

Understanding the latest generation of AI

The latest generation of AI, Generative AI (GenAI), is the development of advanced Machine Learning (ML) using Large Language Models (LLMs) and Multimodal Foundation Models (MFMs). LLMs are black box AI systems that use deep learning on very large datasets to understand and generate new text.[1] MFMs can receive input and content in multiple modes and perform a range of general tasks such as text synthesis, image manipulation and audio generation.[2] GenAI has broad practical applications, with the ability to create new content such as text, images, audio, video and code by learning from data patterns. GenAI relies on existing data, processes it, and then generates data with similar characteristics.[3]

GenAI has broad application across diverse industries such as healthcare, manufacturing, financial services, media and entertainment, advertising and software development. It has the potential to drastically change the way content creation is approached, with the inventive step and creativity shared between human users and AI. It also has the potential to enhance workplace productivity and efficiency: enhanced medical imaging and personalised treatments assisting medical professionals, new drug discoveries, improved supply chain integrity and efficiency, smart maintenance solutions for machinery and equipment, and better financial management through personalised investment strategies and personalised banking services for individuals and business customers.
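To make the idea of pattern learning followed by generation concrete, the minimal Python sketch below prompts a small language model for a text continuation. It assumes the open-source Hugging Face transformers library and the GPT-2 model, both illustrative choices rather than tools discussed in this article.

```python
# Minimal illustration of generative AI: a language model trained on large
# text corpora continues a prompt with new text that follows the statistical
# patterns it learned. The library and model below are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Effective data governance in an organisation requires"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model produces new text with characteristics similar to its training data.
print(result[0]["generated_text"])
```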

AI Data Governance: The building blocks of trust and confidence

Data governance is the establishment of policies, procedures and standards to ensure the quality, security and ethical use of data, which is crucial for accurate, fair and responsible AI operations, particularly where sensitive or personally identifiable information is involved. Effective data governance frameworks help circumvent some of the issues surrounding AI, such as hallucinations, bias and discrimination. Companies may, knowingly or unknowingly, operate with their data stored by third party providers outside Australia, triggering a range of security, protection and compliance issues and vulnerabilities. AI governance frameworks that are holistic and engage all parts of the business require cross-functional integration, an organisational digital literacy uplift and engagement through the value chain. Poor quality data, and a lack of integrity in the data and the systems that process it, may lead to inaccurate outputs and misinformation. Holistic data governance frameworks act to build trust in the technology, both within organisations and across the stakeholders with which the organisation engages.
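One simplified example of such a control is screening records for personally identifiable information (PII) before they are used to train or prompt an AI system. The Python sketch below is illustrative only; the patterns are hypothetical, and real deployments would rely on dedicated PII-detection tooling.

```python
# Hypothetical sketch of a single data governance control: redacting obvious
# personally identifiable information (PII) before data reaches an AI system.
# The patterns below are illustrative, not production-grade detection.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
}

def redact_pii(text: str) -> str:
    """Replace matched PII with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 0412 345 678."
print(redact_pii(record))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```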

A KPMG study found that 61 per cent of those surveyed were either ambivalent about or unwilling to trust AI, despite a vast majority believing that AI will bring benefits to workplaces and the community.[4] Operationalising effective data governance arrangements, and adjusting those governance frameworks as and when needed, is a necessary part of driving trust and confidence in AI. The AI Index Report 2024 found that robust and standardised evaluations for LLM responsibility are seriously lacking. It also found that leading developers, including OpenAI and Google, primarily test their models against different responsible AI benchmarks, complicating efforts to systematically compare the risks and limitations of top AI models.[5] Generative outputs of popular LLMs may contain copyrighted material, such as excerpts from newspapers or scenes from movies, raising concerns about copyright violations. Poor data governance has led to an increased number of AI incidents. In 2023, 123 incidents were reported on the AI Incident Database (AIID), a 32.3 per cent increase from 2022 and a twentyfold increase since 2013.

AI Governance: The 4-step FATE approach

The aim of the FATE (Fairness, Accountability, Transparency and Explainability or Ethics) principles is to support decision making, human-AI co-learning, explainable AI, and fair data, free of bias and discrimination, that can be confidently and securely leveraged to inform customers, suppliers, employees and decision makers.[6] GenAI outputs, predictions and advice need to be fair, understandable, trustworthy, controllable and secure.

Fairness is best realised through bias and discrimination management: the effective identification, mitigation and prevention of such outcomes. Those engaging and interacting with AI require transparent and clearly understandable information and explanations about how decisions and outputs were generated by AI systems. Transparency of decision-making lifts the veil on black box predictions, giving deployers confidence in their accuracy and effectiveness, and giving customers and suppliers confidence that information is used for a proper, genuine and ethical purpose.
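As a concrete, simplified example of bias identification, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups, over invented AI loan-approval decisions. It is one of many possible fairness metrics, chosen here purely for illustration.

```python
# Illustrative fairness check: demographic parity, the gap in positive-outcome
# rates between groups. A large gap flags potential bias for investigation.
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

# Invented loan-approval outputs from a hypothetical AI system
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # approx. {'A': 0.67, 'B': 0.33}
print(f"Demographic parity gap: {gap:.2f}")   # 0.33 -> worth investigating
```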

There is a growing demand for accountability in AI. The question of how to create accountable AI systems is an important element of public and private governance.[7] There are a number of tools for increasing AI accountability.[8] However, the most effective of these may be explanations. Explanations expose information about specific individual decisions without necessarily exposing the precise mechanics of the decision-making process. Explanations can be used to prevent or rectify errors and increase trust. They can also be used to ascertain whether certain criteria were used appropriately or inappropriately in the case of a dispute.[9]
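To illustrate, one widely used explanation technique is permutation importance, which ranks how strongly each input influenced a model's decisions without exposing the model's internals. The sketch below assumes scikit-learn and synthetic data, both illustrative choices not drawn from this article.

```python
# Sketch of a simple model explanation: permutation importance measures how
# much shuffling each input feature degrades accuracy, ranking which inputs
# most influenced decisions without revealing the model's inner mechanics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for decision data (e.g., credit applications)
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```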

FATE principles are AI domain-agnostic and can be used across a wide range of GenAI uses and practices. Microsoft operates on FATE principles with the aim of facilitating computational techniques that are both innovative and responsible, while prioritising fairness, accountability, transparency and ethics as they relate to AI, ML and natural language processing (NLP).[10] The Microsoft AI principles cover six dimensions of responsible AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles guide AI developers towards more inclusive and accessible AI that drives confidence and trust in the technology.

Voluntary technical standards

There are several voluntary technical standards that continue to attract broad uptake across the business community. International standards and risk management frameworks developed by the International Organization for Standardization (ISO) and others aim to provide organisations, including their directors and officers, with further guidance on AI development and deployment. These technical standards are regularly reviewed and updated to monitor and test their effectiveness, making them an appropriate tool for fast-paced, evolving technology such as AI.

Framework for AI systems using ML (ISO/IEC 23053:2022) – establishes an AI and ML framework for describing a generic AI system using ML technology. The framework describes the system components and their functions in the AI ecosystem. It is applicable to organisations of all types and sizes, including private and public companies, government entities, and not-for-profit organisations that are implementing or using AI systems.[11]

AI – Guidance on risk management (ISO/IEC 23894:2023) – provides guidance on how organisations that develop, produce, deploy, or use products, systems and services that use AI can manage risk specifically related to AI. The guidance also aims to assist organisations to integrate risk management into their AI-related activities and functions. It describes processes for the effective implementation and integration of AI risk management.[12]

AI – Management systems (ISO/IEC 42001:2023) – an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. It is designed for entities providing or utilising AI-based products or services, ensuring responsible development and use of AI systems. It addresses the unique challenges AI poses, including ethical considerations, transparency, and continuous learning. It sets a structured way for organisations to manage risks and opportunities associated with AI, balancing innovation with governance.[13]

AI Risk Management Framework (AI RMF 1.0) (NIST) – the aim of the AI RMF is to offer a resource to organisations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The framework is voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organisations of all sizes and in all sectors. The AI RMF aims to equip organisations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.[14]
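As a loose illustration only (the structure below is my own, not a NIST artefact), an organisation might record AI risks against the AI RMF's four core functions of Govern, Map, Measure and Manage:

```python
# Illustrative sketch (not a NIST artefact): a minimal risk register keyed to
# the AI RMF's four core functions -- Govern, Map, Measure, Manage.
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AiRiskEntry:
    description: str
    function: RmfFunction
    owner: str                      # hypothetical accountable role
    mitigations: list = field(default_factory=list)

register = [
    AiRiskEntry(
        description="LLM outputs may reproduce copyrighted material",
        function=RmfFunction.MEASURE,
        owner="Data Governance Lead",
        mitigations=["output filtering", "provenance checks"],
    ),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.description} -> {entry.owner}")
```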

Recent developments in global AI policy, governance and investment

The US and the EU have advanced landmark AI policy action in the last 12 months. The EU enacted the landmark EU AI Act earlier this year. In the US, President Biden signed an Executive Order on AI in October 2023, the most notable US AI policy initiative to date. The number of AI regulations in the US has sharply increased, with 25 AI-related regulations in 2023, up from just one in 2016. In Australia, a new Senate Select Committee on AI was established earlier this year to inquire into and report on the opportunities and risks of AI. Private investment in generative AI has increased nearly eightfold since 2022, reaching US$25.2 billion, led by major players OpenAI, Anthropic, Hugging Face and Inflection.

 

[1] https://www.techtarget.com/whatis/feature/12-of-the-best-large-language-models

[2] https://www.adalovelaceinstitute.org/resource/foundation-models-explainer

[3] https://www.sas.com/th_th/insights/analytics/generative-ai.html

[4] https://kpmg.com/au/en/home/insights/2023/02/trust-in-ai-global-insights-2023.html

[5] Stanford Institute for Human-Centered AI (HAI), AI Index Report 2024, HAI_2024_AI-Index-Report.pdf (stanford.edu)

[6] https://ceur-ws.org/Vol-2846/paper35.pdf

[7] Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Scott, K., Schieber, S., Waldo, J., Weinberger, D. and Weller, A., 2017. Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.

[8] OECD.AI, Tools for Trustworthy AI

[9] Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Scott, K., Schieber, S., Waldo, J., Weinberger, D. and Weller, A., 2017. Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.

[10] https://www.microsoft.com/en‐us/research/theme/fate/overview/

[11] https://standards.iteh.ai/catalog/standards/iso/834bec3e-1b4c-4ebe-bf84-71d3a6c31715/iso-iec-23053-2022

[12] https://www.iso.org/standard/77304.html

[13] https://www.iso.org/standard/81230.html

[14] https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
