
AI chatbots are here: what does this mean for you?


AI chatbots built on large language models (LLMs) process vast amounts of information to predict patterns of writing and human speech, tailoring their output to the user’s needs, intentions, and context.
You will probably be familiar with OpenAI’s ChatGPT, which surged into notoriety as the fastest-growing consumer application ever launched, with more than 100 million active users and counting.

The rise of ChatGPT is fascinating; to read more about it, check out this article in the New York Times by Kevin Roose.

AI chatbots are here and are not going anywhere

We need to ignore the typical media hyperventilation about how AI will affect your life and career prospects and embrace the role AI chatbots will play as a fundamental piece of this decade’s technological infrastructure.

Organisations must learn to mitigate the risks and leverage the benefits.

It is true that the current iteration of LLMs is not without significant risks for organisations and boards. These models are not designed to produce new insights, knowledge or opinions, the areas where human experts already excel.

These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can present opinions as if they were verified facts, leaving users none the wiser.

Boardrooms will have to learn to tackle the major issues emerging from AI, from ethical questions to accountability and transparency concerns to liability implications.

The growing integration of technology in business has already seen boards and senior managers develop a range of new governance skills, such as directly overseeing those running technology units, procuring technology services, and outsourcing technology and services to third parties.

But more needs to be done.

Integrating AI and chatbots into your board activity should lead to quicker, more reliable audit information: automated AI analysis of big data can reduce costs and improve the consistency and reliability of that information.

Automated data processing can also support decision-making and help prevent management from abusing its power. Algorithmic decision-making is the next step, but that’s a topic for another day.

Outsourcing audits carries risks for businesses, from a lack of data control to increased routine data verification costs and potential delays. Integrating AI to complete such raw data verification processes gives directors better control over the process and will enable directors and shareholders to access the data in real time without waiting for reports.

Given the embedded problems within corporate governance, the ability for an innovative approach to tackle such problems could be the solution we have long awaited.

Dr Joseph Lee & Mr Peter Underwood – AI in the Boardroom: Let the Law be in the Driving Seat

To properly manage the integration of AI into the boardroom, and into organisations more generally, AI governance frameworks are needed to learn from, govern, monitor, and mature AI adoption. Guardrails need to be implemented to ensure that AI works as intended. This is not just a job for your IT team or software engineers; it should encompass your entire organisation.

Governments across the world are introducing new regulations and guidelines to prevent the harms caused by both intentional and unintentional misuse of AI, led by Singapore’s world-first AI Governance Testing Framework and Toolkit for companies. AI is expected to contribute US$15.7 trillion to global GDP by 2030 — more than the current economic output of India and China combined.

But given the speed at which AI is growing, can governments keep up?

Our government agencies need to understand and use these systems well to be able to regulate them effectively and manage the changes they will bring.

The federal government is currently pursuing the long-overdue reform of the Privacy Act 1988 (Cth), conducting a public consultation to help make the Act ‘fit for purpose’ and ‘adequately protect Australians’ privacy in the digital age’.

It might be time to fold in a review of the guardrails in place for AI, given that Australia’s Artificial Intelligence (AI) Ethics Framework was published back in November 2019.

Careful guidance on how we can use these new technologies, and make them far more accessible, is critical. The government must be more proactive in educating people on the limits, as well as the benefits, of using large language models in their business.

One place to start is our Ethical AI Good Governance Guide, but more work is needed to ensure the AI revolution does not overwhelm regulation.
