Artificial intelligence: No longer a future challenge
Artificial Intelligence (AI) often reminds us of Hollywood, with films such as The Terminator and The Matrix presenting dystopian futures where humans and robots face off in epic battles.
However, AI and automated decision-making (ADM) are increasingly being adopted in organisations across Australia.
It is estimated that digital technologies, including AI, will be worth A$315 billion to the Australian economy by 2028.
The Federal Government recently completed consultations regarding potentially regulating AI and ADM, with Governance Institute of Australia lodging a submission in response to the issues paper.
This article discusses the technologies, the potential they offer and the challenges they pose.
Defining AI and ADM
AI is defined as a fast-evolving group of technologies that often use data to improve prediction, optimisation and service delivery for outcomes including economic, social, strategic, military or environmental benefits. ADM is closely connected to AI. Automated systems range from traditional rules-based systems (for example, a system which calculates a rate of payment in accordance with a formula) through to more specialised systems which use automated tools to predict and make decisions.
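The rules-based end of that spectrum can be illustrated with a minimal sketch. The payment name, rates and thresholds below are hypothetical, invented purely for illustration and not drawn from any real payment scheme; the point is that the decision logic is an explicit, human-written formula that can be read and audited line by line, in contrast to a predictive model whose decision rule is inferred from data.

```python
# Hypothetical rules-based ADM system: an explicit, auditable payment formula.
# All rates and thresholds are illustrative assumptions, not real scheme values.

BASE_RATE = 250.0         # weekly base payment (hypothetical)
INCOME_THRESHOLD = 150.0  # weekly income free area (hypothetical)
TAPER_RATE = 0.50         # payment reduces 50c per dollar earned over the threshold

def weekly_payment(weekly_income: float) -> float:
    """Calculate a rate of payment in accordance with a fixed formula."""
    reduction = max(0.0, weekly_income - INCOME_THRESHOLD) * TAPER_RATE
    return max(0.0, BASE_RATE - reduction)

print(weekly_payment(100.0))  # income under threshold: full payment, 250.0
print(weekly_payment(300.0))  # tapered: 250 - (150 * 0.5) = 175.0
print(weekly_payment(800.0))  # fully tapered to zero: 0.0
```

Because every rule is visible in the source, the outcome for any individual can be traced and contested. That traceability is exactly what becomes harder with the more specialised predictive systems discussed in this article.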
The growth of these technologies offers significant opportunities in productivity, economic growth and efficiency across a broad range of business sectors. They can also offer social and environmental benefits in areas such as medicine, climate change and the environment, emergency response, logistics, finance, law, government services and defence. They are also likely to offer higher quality jobs, particularly for young people moving into the workforce, potentially with higher work satisfaction and less manual labour. However, the emergence of these new jobs could create challenges for industry around re-skilling the current workforce to adapt to the new technology.
The challenges for organisations and society
There are challenges connected to the emergence of AI and ADM for organisations, society and government.
Organisations: It is important that organisations use high-quality data when developing these technologies, as inaccurate or poor-quality data can lead to bias and adverse outcomes for individuals. For example, researchers in the United States found that AI models trained on data reflecting existing racial bias in healthcare delivery could produce adverse outcomes for minority groups.
Society: A key issue for the development of AI and ADM is inclusion. It is important that the increasing use of AI and ADM does not exacerbate existing issues within Australian society such as the digital divide. The digital divide excludes Australian businesses and individuals who cannot access fast and reliable internet, whether because of affordability, connectivity, speed and extent of access, bandwidth, digital literacy or other barriers. The most recent Australian Digital Inclusion Index (ADII) report found that the number of Australians highly excluded from digital services is reducing but remains substantial. Technology organisations can help address this issue by involving diverse stakeholders in their risk assessment and design processes. Employing staff from diverse backgrounds would also be beneficial.
The challenges for government
There are many issues for governments to address in regulating these technologies including duplication, interoperability and stakeholder complexity.
Duplication: There is potential for duplication and overlap with interrelated policy areas such as privacy and data protection, cyber security and security of critical infrastructure. These areas of policy are important components of the governance and risk management frameworks of most, if not all, organisations in the modern Australian economy. In our submission we argued it is critical that any new legislation is harmonised in relation to these policy areas to ensure clarity and minimise regulatory burden. Reducing regulatory burden will allow organisations to innovate, develop new technologies and compete in global markets.
Interoperability: It is important that innovative technologies created in Australia can be developed, industrialised and exported to overseas markets. There are significant opportunities that will continue to arise with the development of these technologies, and it is critical that Australian organisations can compete internationally and become significant innovators in this field.
Stakeholder complexity: The development of AI and ADM can involve very complex relationships, which in turn create challenges for regulation. The research organisation the Ada Lovelace Institute has found that “many AI products are not produced by a single organisation, but involve a complex web of procurement, outsourcing, re-use of data from a variety of sources, etc. This changes the question of who is in scope, and who should be accountable, for different parts of the AI lifecycle.”
A possible solution
In our submission we argued the current principles-based approach to regulating AI and ADM may be the best solution. This approach could be built on Australia’s existing AI Ethics Principles. These principles outline the key outcomes organisations should consider in developing innovative AI, including human-centred AI, transparency, accountability, fairness, privacy, safety and contestability. These principles could be supported with further guidance and resources for organisations around conducting impact assessments before, during and after AI and ADM development.
The continued rise of AI and ADM will create major opportunities globally. It is critical that Australia adopts an approach which facilitates technological advancement but is also inclusive, human-centric and focused not only on productivity but also on providing benefits for society more broadly.