Artificial intelligence — Governance issues facing assurance providers and boards
Mixed news for humans: artificial intelligence can be extremely helpful to internal auditors and boards, providing useful information to work from, but it does not deliver miracles of analysis.
This was one of many ideas to come out of the panel webcast discussion recorded in late March by the Institute of Internal Auditors in conjunction with the Governance Institute of Australia.
CSIRO futurist and AI specialist Rob Hanson, EY Director of Financial Services Charlie Puddicombe, and barrister and law lecturer Dr Phillipa Ryan dived into the topic for our benefit. They offered a relatively even mix of approbation and warnings about how AI-related technologies can do a lot of the heavy lifting for auditors and their boards, but only if managed well.
Rob said that while some directors are described as “uncomfortable” with elements of this new technology, a better word might be “terrified”. He also pointed out that “AI”, in some shape or form, has been around since the 1950s.
He noted that CSIRO’s recently created Data61 division, where he works, is running training courses to explain the benefits of AI across a range of industries.
Far from fielding simple questions, “I get asked questions that are too technical and too advanced,” he said, suggesting there is a danger of the basics being misunderstood. “Not in public, but behind closed doors,” he added.
Charlie focused on the advantages for internal auditors, whom he advises, noting that it’s the correct balance of human and artificial intelligence that gets the best results.
He noted that concepts such as machine learning, natural language processing and robotics can easily be harnessed to speed up the collection of data by both external and internal auditors.
“AI can augment the human,” he noted, adding that it was particularly useful in taking over repetitive tasks. It can also help those auditors by validating information, he said, and converting data into useful information on which decisions can be based.
Meanwhile, boards can be much more quickly shown important information requiring decisions, he said. But he noted it often required human input to actually analyse the information.
Another hot area in the discussion was biases affecting the value of information, starting with unconscious bias during the coding process.
It was noted that most coders are young males. Charlie said more than 160 separate biases have been identified as possibly influencing human decision-making.
Dr Ryan (Pip) said that boards need not be full of AI experts. They should be considered aggregators of information rather than aiming to have every member be multi-skilled, she said. “And if you want to eradicate these biases, have a really mixed board.”
She said she knew she had recently been appointed to two boards because of her knowledge of AI, the ‘automation of trust’, blockchain technology and the accountability of algorithms.
She also pointed out there is a huge range of use-cases for AI, running from machine algorithms that teach themselves, through to car windscreen wipers that use sensors to turn themselves on when it is raining.
The main point, she said, is to make sure AI is deployed in a domain where it can work most effectively for your specific use case, bearing in mind that the technology is still very immature.
“If I had an autonomous vehicle, for instance, I wouldn’t operate it outside a school where there are five-year-olds, because they behave unpredictably,” she said.
“But it would be fine around the curtilage of an airport, where you have a lot of control.”
“In a board situation, don’t just buy the technology and assume it’s going to fit your business. Having bought it, know that that’s not the end of the discussion, it’s just the beginning.”
Watch the free webcast
Watch our free webcast to better grasp the governance issues associated with AI.