
Technology governance and innovation: Considering human rights among the datasets

Are modern-day technological advances inclusive and aligned with human rights? How do boards and management navigate the emerging and complicated relationship between artificial intelligence (AI) and the equal treatment of customers and clients?

Human Rights Commissioner at the Australian Human Rights Commission, Edward Santow, believes the opportunities are great, but organisations must be alert to the potential for new technology to impinge on human rights.

Edward will speak at the ‘Technology — regulation, governance and innovation’ session at Governance Institute’s virtual National Conference, alongside fellow panellists CSIRO’s Data61 Director, Professor Jon Whittle, and Scientific Futurist Dr Catherine Ball.


Here he gives the lowdown on the latest governance and human rights issues.

Q. How might the relationship between human rights and technology affect the governance of organisations and businesses?

“The first thing I would say is that we at the Human Rights Commission acknowledge there are so many opportunities that new technologies like artificial intelligence bring to our society, for example in economic development. There are also opportunities to advance human rights, and there are already companies using technology like AI to make our communities more inclusive.

“The difficulty is that for every one of the positive examples there’s also a dark side. We’ve seen, in the use of AI by governments and by corporations, that there are real risks and threats of harm. The challenge for businesses, and particularly for boards, is to see both sides: to see the opportunities – and those opportunities are real – but also to understand and address the risks and threats of harm, because they are equally real. Unless you do the latter, you’re never going to properly grasp the former.”

Q. What should businesses consider in relation to human rights when planning new technologies such as data analytics, AI, predictive technology and robotics?

“A good example is algorithmic bias. The Human Rights Commission has just launched a paper on this topic, which examines one of the biggest issues in this area of technology and human rights. The paper is framed around a practical scenario – we chose to situate it in the electricity sector, but it could apply to any similar service.

“AI or data-driven decision making can allow a company to trawl through large datasets with a view to identifying whether a prospective customer is going to be profitable or not. If you’re deemed to be a profitable customer, based on the algorithm applied to the dataset, then you’ll tend to get a better deal and you may get greater support if something goes wrong. If you’re deemed to be a ‘bad’ customer, you may be offered the same service but on worse terms, or you may be denied the service altogether. So the stakes are reasonably high.

“There’s also a real risk, for example, that by using older datasets companies can fail to keep up with the way in which our community is becoming more equal. For example, we know that the gender pay gap between women and men is decreasing, which is a good thing in the real world. But the data doesn’t have to be very old to paint a different picture. What AI tends to learn from out-of-date data is: ‘There’s a big pay gap between men and women, so women must be less good customers because they don’t earn as much money’. Whereas in fact, that’s wrong in two ways. One, the pay gap is actually decreasing (although still very definitely an issue), and two, the cause of the pay gap is structural, historical injustice, not that women are bad customers.
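To make the mechanism concrete, here is a minimal, hypothetical sketch of the failure mode described above. The dataset, the dollar figures and the deliberately naive model are all invented for illustration, and gender is used as an explicit feature purely to make the bias visible.

```python
# Hypothetical sketch (invented data, deliberately naive model) of bias
# learned from outdated data: the model concludes "women earn less, so
# women are worse customers" and freezes that verdict.
import random

random.seed(0)

def historical_income(gender):
    # Assumed historical pay gap of $18,000 (illustrative figure only).
    base = 70_000 if gender == "M" else 52_000
    return random.gauss(base, 8_000)

# "Historical" training data: 1,000 simulated customers per gender.
history = [(g, historical_income(g)) for g in "MF" for _ in range(1_000)]

# The naive model memorises average income per gender and then scores
# the group rather than the person.
avg = {g: sum(i for gg, i in history if gg == g) / 1_000 for g in "MF"}
threshold = sum(avg.values()) / 2

def deemed_profitable(gender):
    return avg[gender] >= threshold

# Even if the pay gap has since narrowed, the verdict is frozen in old data:
print("M deemed profitable:", deemed_profitable("M"))  # True
print("F deemed profitable:", deemed_profitable("F"))  # False
```

The sketch is deliberately crude, but real systems fail the same way more subtly: income, postcode or purchase history can act as proxies that carry the historical pattern into the model even when gender is never supplied as an input.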

“AI can definitely improve the way that a company makes decisions, but companies have to be alive to those risks. Unless you’re alive to those risks, there’s a real chance you will treat your customers unfairly, or maybe even act unlawfully.”

Q. Are we (as a nation, community and individuals) prepared to absorb rapid changes in technology while maintaining human rights principles? For example, do laws need to change to ensure the principles of human rights can be upheld?

“There are definitely gaps in the law that need to be filled, no question. But I fear we’ve focused too much on the gaps, rather than on applying existing legislation more effectively. In the example above, it’s unlawful to discriminate against a woman whether you’re using a conventional form of decision-making or a sophisticated one powered by AI. If the net effect is that a woman is discriminated against, it’s equally unlawful. The biggest challenge is to make sure that we apply those existing rules as rigorously to new tech as we would in a more conventional setting.

“An area where we believe more can be done in terms of the law is transparency of decision-making. One of the things that is truly novel about AI is that it tends to be more opaque: it can be harder to work out the rationale for a decision. That’s a challenge because if you don’t know why a decision was made, you may not know whether it’s the right decision. You also may not know whether it was lawful.”

Q. How much of the responsibility for balancing new technology programs against human rights obligations sits with governing boards?

“There are three big things that boards need to do in this space. The first is they need to make sure that the risks associated with AI are properly considered. This consideration has to be rigorous.

“The second is to have ongoing monitoring of any system that uses AI, because the whole point of AI is that it will learn as it goes. So you may have a situation where it’s perfect on day one, but then it learns terrible things over time.
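In practice, a board-level monitoring requirement like this often translates into a recurring check on the system’s outputs. The sketch below is a hypothetical illustration of one such check; the metric (approval rate by group) and the five-percentage-point tolerance are assumptions chosen for the example, not an established standard.

```python
# Hypothetical periodic fairness check for a continuously learning system:
# compare approval rates across groups and flag any gap beyond a tolerance.
def approval_rate(decisions, group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def fairness_alert(decisions, groups=("M", "F"), tolerance=0.05):
    rates = {g: approval_rate(decisions, g) for g in groups}
    gap = max(rates.values()) - min(rates.values())
    return gap > tolerance, rates

# Example: this week's decisions as (group, approved) pairs.
this_week = ([("M", True)] * 90 + [("M", False)] * 10 +
             [("F", True)] * 70 + [("F", False)] * 30)
alert, rates = fairness_alert(this_week)
print(f"rates={rates}, alert={alert}")  # gap of 0.20 triggers the alert
```

Run weekly or per release, a check like this can catch the “perfect on day one, worse over time” drift before it becomes a legal or reputational problem.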

“The third thing is boards need to be willing to make the hard decisions. That means asking: ‘Have we got this good enough that it’s going to comply with the law, and that it’s going to treat people fairly?’ If the answer to that question is no, then it’s not safe to be used in the real world. If you’re doing something customer-facing that might really affect people’s lives, for example issuing an electricity contract or a bank loan, you need to make sure that you’re not harming people while you’re learning how a new system works.”

Edward Santow will speak at Governance Institute of Australia’s upcoming virtual National Conference in the ‘Fireside chat: Technology — regulation, governance and innovation’ session at 1.20pm on 8 December. This session is sponsored by LexisNexis.

