
AI in the boardroom: Could robots soon be running companies?

  • Australian boards are likely to look to AI and machine learning to improve the quality of their decision-making.
  • Directors who wish to make use of AI should do so as an aid to their own decision-making, rather than as a substitute for making an independent assessment.
  • Directors need to be aware of the legal risks associated with using AI and how to properly manage them.

Artificial intelligence (AI) and automation more broadly continue to be identified as the next frontier in productivity enhancement and growth. Last year, McKinsey estimated that AI could add some US$13 trillion to global economic output by 2030, lifting global GDP growth by approximately 1.2 per cent a year.1

Consistent with this trend, Australian boards are likely to look increasingly to AI and machine learning to improve the quality of their decision-making. But can an algorithm run a company instead of a director?

The term ‘AI’ is often used synonymously with machine learning, but this is not strictly correct.

True AI exhibits features of human-like intelligence and the ability to use human-like judgment in decision-making. This is in contrast to machine learning tools that conduct statistical analysis of data sets to identify patterns, but which are not exercising ‘judgment’ to reach conclusions. Despite these differences, both AI and machine learning tools rely on large, high-quality data sets to improve, and both will inevitably make mistakes along the way.
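
To make the distinction concrete, the following sketch (in Python, using the open-source scikit-learn library; the data set and feature names are invented for illustration) shows what a machine learning tool does in this sense: it fits a statistical model to historical examples and extrapolates from the pattern, without exercising anything resembling judgment.

```python
# Minimal illustration of 'machine learning' as statistical pattern-finding.
# The data set and feature names are hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical observations: [revenue_growth, debt_ratio] -> outcome
# (1 = venture succeeded, 0 = venture failed)
X = [[0.10, 0.3], [0.02, 0.8], [0.15, 0.2], [0.01, 0.9], [0.08, 0.4], [0.03, 0.7]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # identify the statistical pattern in the past data

# The 'decision' is an extrapolation from that pattern, not a judgment.
print(model.predict([[0.09, 0.35]]))        # predicted class for a new case
print(model.predict_proba([[0.09, 0.35]]))  # associated probabilities
```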

Predictions of robots in the boardroom are not far-fetched. In late 2016, OMX-listed Tieto Corporation announced that it had appointed an AI platform known as Alicia T to be a member of its executive leadership team. Alicia T is equipped with a conversational interface that allows its human counterparts to ask it questions. The platform even has a vote on some management decisions.

Similarly, Hong Kong venture capital firm Deep Knowledge Ventures appointed an algorithm known as Vital to help the fund make its investment decisions. These appointments reflect a growing acceptance that machine learning may be capable of making better business decisions than human beings.

Can an algorithm run a company instead of a director?

For the time being, the answer to this question is no. A robot can’t be a director under Australian law. By definition, a director must be a ‘person’. We do expect, however, that directors will increasingly seek to use machine learning and AI to assist them in their own decision-making and to rely on decisions taken elsewhere within the organisation that are the product of the application of AI. In this context, it is critical that directors are aware of the legal risks associated with using AI and how to properly manage them.

AI is not foolproof and directors must expect that some decisions made by AI will be wrong. This may be for numerous reasons, including that:

  • the algorithm is incorrect or poorly understood
  • the data set is inappropriate or is contaminated by bias or
  • the decision-making was impacted by coding error or malfeasance (eg hacking).

Where the AI is wrong, this can result in wrong decisions or even decisions that breach the law. For example, in the human resources context, the use of AI tools in conjunction with data about previous successful employees to predict which candidates are most likely to be successful in the future may simply reinforce existing biases or discrimination in hiring practices. The issue for directors is whether they might be exposed to a breach of their duty to exercise reasonable care and diligence as a result of the failure of the AI.
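
By way of illustration only, the hypothetical Python sketch below (all data invented) shows how this can happen: where a protected attribute, or a proxy for it, correlates with past hiring outcomes, a model trained on those outcomes will simply learn and replay the historical bias.

```python
# Hypothetical illustration: a model trained on biased hiring history
# reproduces that bias. All data is invented.
from sklearn.tree import DecisionTreeClassifier

# Features: [test_score, group], where 'group' encodes a protected attribute.
# Historically, group 1 candidates were rarely hired regardless of score.
X = [[90, 0], [85, 0], [60, 0], [92, 1], [88, 1], [55, 1]]
y = [1, 1, 0, 0, 0, 0]  # past hiring decisions reflecting discrimination

model = DecisionTreeClassifier().fit(X, y)

# Two equally strong candidates who differ only in group membership:
print(model.predict([[90, 0]]))  # hired
print(model.predict([[90, 1]]))  # rejected -- the historical bias is replayed
```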

AI and safe harbours

There are three important safe harbours available under the Corporations Act 2001 (Corporations Act) to directors who are accused of breaching their duty to exercise reasonable care and diligence. These are:

  • the business judgment rule in s 180(2)
  • the right of reliance in s 189 and
  • the protection for delegation in s 190(2).

Australian courts have not yet had an opportunity to consider how those safe harbours might respond to a case where an impugned decision was made by or with the assistance of AI. However, a first-principles assessment suggests that the safe harbours might not be available if directors were to simply adopt decisions made by AI without exercising independent judgment.

1. The business judgment rule

Under s 180(2) of the Corporations Act, a director who makes a business judgment is taken to have discharged their duty of care and diligence if they:

  • make the judgment in good faith and for a proper purpose
  • do not have a material personal interest in the subject matter of the judgment
  • inform themselves about the subject matter of the judgment to the extent they reasonably believe to be appropriate and
  • rationally believe that the judgment is in the best interests of the company. This will be the case unless the belief is one that no reasonable person in their position would hold.

There would seem to be two potential obstacles to a director who relies on AI to make a decision taking advantage of the business judgment rule (assuming that the first, second and fourth elements above are made out).

The first is whether the director has made a ‘business judgment’ at all. Under s 180(3), a business judgment means any decision to ‘take or not take action in respect of a matter relevant to the business operations of the corporation’. In ASIC v Rich, Austin J noted that the decision must be ‘consciously made’ and that the director must have ‘turned his or her mind to the matter’. Austin J’s language seems to attach to the impugned decision itself rather than to the preceding decision to make that decision using AI. It would appear, therefore, that a director who wholly hands over decision-making to AI does not make a business judgment to which the defence can attach.

This point is further underscored by the requirement in s 180(2)(c) that the director must have informed themselves about the subject matter of the judgment ‘to the extent they reasonably believe to be appropriate’. Again, this requirement appears to attach to the impugned decision and is not satisfied by a director who determines that a class of decision-making can be best left to AI. If any of those decisions turn out to be incorrect, the director can hardly say that they have informed themselves about the subject matter of that decision in the manner required by s 180(2)(c).

2. The right to delegate

Section 198D(1)(d) of the Corporations Act entitles directors to delegate any of their powers to another ‘person’. Again, the reference to a ‘person’ here precludes delegation to a machine. The directors may, however, choose to delegate to a person (such as an employee) who they know will rely on the use of AI for the purposes of discharging that power.

Under s 190(2)(b), a director is not responsible for the actions of the delegate where the director believed on reasonable grounds, in good faith and after making proper enquiry if the circumstances indicated the need for enquiry, that the delegate was reliable and competent in relation to the power delegated. The question, therefore, is what enquiry directors should undertake before they delegate any of their decision-making power to a person who will use AI in exercising that power.

We would suggest that the proposed use of AI constitutes circumstances that ‘indicate the need for enquiry’ as to the reliability and competence of the decision-maker within the meaning of s 190(2)(b).

Given that the decision will be made or informed by AI, this likely translates into an obligation on directors to satisfy themselves as to the reliability and competence of the AI itself. A director who fails to interrogate the algorithm and/or data set, or to question the appropriateness of the particular platform for the duties being delegated, risks the court finding that they have not satisfied themselves as to the reliability and competence of the delegate. In that case, the director will be liable for any failure of the delegate as if it were the director’s own breach of duty (see s 180(1)).

3. The right of reliance

In certain circumstances, a director is entitled to rely on information or advice taken from an employee, professional adviser, expert or another director.

Section 189 of the Corporations Act provides that a director’s reliance on such information or advice will be deemed reasonable for the purposes of discharging the director’s duty of care and diligence if the reliance was in good faith and made after an independent assessment of the information or advice, having regard to the director’s knowledge of the corporation and the complexity of its structure and operations.

There is no apparent reason why a director would not be entitled to rely on information or advice that has been generated by the relevant adviser with the benefit of AI. What is not clear, however, is whether s 189 allows a director to rely directly on the output of AI itself. This turns on whether the court would be willing to regard the AI tool as a ‘professional adviser or expert’ within the meaning of s 189.

While it is unlikely that Parliament intended those words to include a machine, the wording does not necessarily preclude such a finding. Demonstrating that the AI is expert in relation to a particular subject, however, would require strong evidence as to the workings of the automated decision-making and its application to the subject matter of the decision. The safer and more likely course is therefore for directors to rely on the advice of an employee or expert who has used AI in forming that advice.

Where a director relies on the advice of an employee that is generated with the help of AI, the director must believe on reasonable grounds that the employee is ‘reliable and competent in relation to the matters concerned’. In the case of a professional adviser, the director must believe on reasonable grounds that the ‘matter is within the person’s professional or expert competence’. This creates a potential disconnect where machine learning is used to reach a decision, as the person who is expert in the application of AI may not be expert in the subject matter to which the AI is being deployed. The language of the Corporations Act seems to require expertise in relation to the subject matter rather than expertise in the way that decisions are made. Following this logic, the reliance defence appears to be available only where the adviser has taken the output of the AI and applied their own subject matter expertise to it before providing advice to the board.

The final requirement — that the director must have made an ‘independent assessment’ of the information or advice on which he or she relies — is perhaps the most significant.

This goes beyond the equivalent requirement in the business judgment rule (which requires that the director be informed about the subject matter of the judgment ‘to the extent they reasonably believe to be appropriate’) and the delegation right (which requires the director to make proper enquiry as to whether the delegate was reliable and competent), in that it requires the director to actively interrogate the advice itself. The degree of interrogation required will vary depending on the gravity of the decision and its potential consequences for the company. On any assessment, however, it appears that a director must not simply follow a decision formed by AI, and must form their own view on the issue, if the reliance defence is to be made out.

Looking ahead

It is clear from the above analysis that directors who wish to make use of AI should do so as an aid to their own decision-making, rather than as a substitute for making an independent assessment.

On every level, the law continues to expect directors to exercise an inquiring mind as to the matters before them and to interrogate the advice and information on which they rely. The risk of automation bias, where humans are inclined to assume that a decision made by a machine must be correct, is significant.

In the case of AI tools, directors will need to invest in their understanding of the technology and how it is being deployed. This can be challenging in the context of complex proprietary systems but, at the very least, directors should require rigorous testing of the outputs of AI tools for inbuilt biases and other problems.
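
What such testing might look like in practice: the minimal Python sketch below (the data and the 80 per cent threshold are invented for illustration) performs a basic spot check, comparing an AI tool’s favourable-outcome rates across two groups. A large gap is a prompt for further enquiry, not proof of anything unlawful.

```python
# Hypothetical spot check of an AI tool's outputs for disparate outcomes.
# 'decisions' pairs each output (1 = favourable) with a group label.
decisions = [
    (1, "A"), (1, "A"), (0, "A"), (1, "A"),
    (0, "B"), (0, "B"), (1, "B"), (0, "B"),
]

def favourable_rate(group):
    outcomes = [d for d, g in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = favourable_rate("A"), favourable_rate("B")
print(f"Group A: {rate_a:.0%}, Group B: {rate_b:.0%}")

# Illustrative rule of thumb (not a legal standard): flag if one group's
# favourable rate falls below 80% of the other's.
if min(rate_a, rate_b) < 0.8 * max(rate_a, rate_b):
    print("Disparity flagged: escalate for human review and further testing.")
```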

At the end of this article, we have set out a series of questions that directors may choose to ask in relation to the use of AI tools in the company, in order to guard against the risks identified above. In most cases, however, we would recommend that directors seek advice from their General Counsel or other legal adviser about the use of AI tools in decision-making before they are deployed.

Proponents of AI may complain that imposing requirements on directors to ‘second guess’ AI defeats the purpose of the technology, and risks impeding innovation and good decision-making in Australian boardrooms. We consider, however, that the current state of the law is well placed to both support the further implementation of AI tools and preserve good governance in decision-making.

Responsibility for corporate decisions must continue to rest with a human being who is ultimately answerable to shareholders. This tension between automated decision-making and human accountability will support the development of good AI and the sensible application of new tools to boardroom decisions.

Further regulation of AI is also on the horizon, with jurisdictions considering the ethical and liability implications of the use of these technologies.

10 key questions for directors to ask in relation to AI

  1. Where do we use AI in our business?
  2. What decisions does it make?
  3. Who could be impacted by those decisions?
  4. Do we tell people who could be impacted by the decision that we have used AI and that they have a right to have the decision reviewed by a person?
  5. Do we understand the algorithm?
  6. Is it consistent with our values/objectives?
  7. Have we satisfied ourselves that the data source is appropriate for our specific use?
  8. Do we have a human review step in our decision-making loop – if not, do we undertake spot checks and trend analysis of the AI-generated output (see the sketch after this list)?
  9. Is the decision-making process transparent — can it be audited?
  10. Did we buy the tool from a third-party vendor? If so, what warranties has the vendor given us as to performance?
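
On question 8, a minimal sketch (Python, with invented figures and an invented 10-point tolerance) of what periodic spot checks and trend analysis might look like: tracking the tool’s approval rate across review periods and flagging drift beyond a chosen tolerance.

```python
# Hypothetical trend analysis of an AI tool's outputs across review periods.
# The figures and the 10-point tolerance are invented for illustration.
approval_rates = {"Q1": 42, "Q2": 44, "Q3": 43, "Q4": 58}  # % favourable per quarter

baseline = approval_rates["Q1"]
TOLERANCE = 10  # percentage points of drift before escalation

for quarter, rate in approval_rates.items():
    drift = rate - baseline
    status = "FLAG for review" if abs(drift) > TOLERANCE else "ok"
    print(f"{quarter}: {rate}% (drift {drift:+d} pts) -> {status}")
```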
Notes
  1. McKinsey & Company, ‘Notes from the AI frontier: Modelling the impact of AI on the world economy’, April 2018.

Justin Fox can be contacted on (03) 9672 3464 or by email at justin.fox@corrs.com.au.

James North can be contacted on (02) 9210 6734 or by email at james.north@corrs.com.au.

Jennifer Dean can be contacted on (02) 9210 6370 or by email at jennifer.dean@corrs.com.au.

Material published in Governance Directions is copyright and may not be reproduced without permission. The views expressed therein are those of the author and not of Governance Institute of Australia. All views and opinions are provided as general commentary only and should not be relied upon in place of specific accounting, legal or other professional advice.
