
Interview — Human Rights Commissioner, Edward Santow — AI and human rights: A double-edged sword

Artificial intelligence (AI), where machines imitate intelligent human behaviour, is expanding rapidly, and it can be a double-edged sword.

Australia’s Human Rights Commissioner, Edward Santow, notes that while AI can open up many opportunities that promote human rights, it can also create a host of new risks.

‘The one risk that has been focused on most fully to date has been privacy. The idea has been that AI needs fuel and the main fuel for AI is personal information. That’s how AI learns about the motivations of consumers and becomes useful to us. But it also puts at risk our personal information.’

That said, Santow believes AI poses even bigger risks to a range of other human rights.

A fundamental concern is that people’s personal information could be used against them. That could result from what has become known as ‘algorithmic bias’ — a modern form of discrimination that occurs when a system reflects the implicit values of the humans involved in coding, collecting, selecting, or using data to train the algorithm.

‘An algorithm trawls through a data set to make decisions, but the data set itself may encapsulate a history of injustice. We have seen this when AI has been used in the criminal justice system in Australia and overseas.’

In one case, a 2016 investigation into a machine-learning program used by US courts to predict who is likely to commit another crime after being arrested found that the software rated black defendants as higher risk than white defendants.

‘The system had been explicitly asked not to take into account race, but the problem was that the data used was infused with decades and decades of racial bias and it was impossible to disentangle that from how AI operates,’ says Santow.

‘In a country like the US, and frankly in Australia too, where you live can be a very strong indicator of your background. There are certain neighbourhoods or districts where there is a high concentration of people with a certain ethnic background. If the computer learns that there is a significantly higher crime rate in a particular area, it can then make the logical leap that people of a certain ethnic background who live in that area might be more likely to commit a crime.’
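
To make that proxy effect concrete, here is a minimal sketch in Python. The two-postcode population, the arrest rates and the 80/20 split are all invented for illustration; nothing here describes any real system. Ethnic group is never given to the scoring step, yet the risk scores it produces still differ sharply by group, because postcode correlates with group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic population: group membership correlates with postcode.
group = rng.integers(0, 2, n)  # 0 or 1; never shown to the scoring step
postcode = np.where(group == 1,
                    rng.choice([2000, 2001], n, p=[0.8, 0.2]),
                    rng.choice([2000, 2001], n, p=[0.2, 0.8]))

# Historical arrest records reflect heavier policing of postcode 2000,
# not any real difference in offending (invented rates).
arrested = rng.random(n) < np.where(postcode == 2000, 0.30, 0.10)

# "Model": predicted risk = historical arrest rate of the person's postcode.
# Ethnicity is explicitly excluded from the inputs.
risk_by_postcode = {pc: arrested[postcode == pc].mean() for pc in (2000, 2001)}
risk = np.array([risk_by_postcode[pc] for pc in postcode])

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.2f}")
# Group 1 scores markedly higher even though group labels were never an input:
# postcode acts as a proxy, carrying the historical bias into the predictions.
```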

Another big risk concerns a possible erosion of longstanding accountability mechanisms.

‘When you make a decision using AI, it is often less transparent,’ says Santow. ‘It’s harder to discern the basis for that decision. You may have an instinct that this decision that affects you was made unlawfully because it took into account, say, your age, disability or another factor, but you can’t necessarily determine whether that decision is correct because the decision-making process is so opaque. You may never be able to get to the bottom of whether you were discriminated against at all.’

However, Santow says things don’t necessarily have to be like that. ‘We can design AI systems in ways that make them able to explain the basis of their decisions, but we have to insist on that.

‘One filter that can indicate that something might be problematic is when you have people of a particular race, age or gender who consistently get worse outcomes. That doesn’t necessarily mean they are being discriminated against, but if you can see a consistent trend, that should put you on notice that you need to look more closely at the decision-making process. That’s something that we already do in conventional decision-making.’
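
As an illustration of the kind of screening filter Santow describes, the sketch below computes the favourable-outcome rate for each group and flags any group whose rate falls well below the best-performing group’s. The four-fifths threshold is one common heuristic from discrimination-screening practice, not something prescribed in the interview, and the function name and data are invented.

```python
import numpy as np

def flag_disparity(outcomes, groups, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls below `threshold`
    times the best-performing group's rate (the 'four-fifths' heuristic).

    outcomes: array of 0/1 decisions, 1 = favourable
    groups:   array of group labels (e.g. age band, gender)
    """
    outcomes = np.asarray(outcomes)
    groups = np.asarray(groups)
    rates = {g: outcomes[groups == g].mean() for g in np.unique(groups)}
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Hypothetical loan-approval decisions split by an applicant attribute.
decisions = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]
attribute = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
print(flag_disparity(decisions, attribute))  # {'B': 0.333} -> look more closely
```

A flag from a check like this is not proof of discrimination; as Santow notes, it simply puts the decision-maker on notice that the process needs closer scrutiny.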

Santow says some companies tell the Australian Human Rights Commission that they are not engaging with AI at all because they are aware of its risks.

‘I can understand that. It’s big and scary and it’s expanding very quickly,’ he says. ‘But that’s such a black and white response because AI does have huge opportunities and if you are not going to engage with it, you are closing yourself off to its potential benefits.’

One route followed by some companies is to focus on ethical AI. ‘That’s a good thing, but this option focuses more on marketing and communications with consumers without necessarily going deep into the way in which companies operate and ensuring their actions match their words,’ says Santow.

‘The option we would recommend is to say there are huge opportunities and we want to be part of that, but we are also conscious of the risks, so we are going to be really upfront and are going to work out how to address those risks.

‘It may mean that you will not be the first company to market, but on the other hand, I think that if you want to build enduring trust with the community and your clients, it is really important to be walking the walk.’

Santow says one of the challenges of AI is that it’s developing so quickly. ‘We can’t just spend the next decade or two working out how to conceptualise it. We have to be swift and decisive.

‘The new discourse on ethics is good and potentially useful, but if you are going to do it properly, it will take a long time. Most companies are taking a more shorthand approach to what ethics means, and it then gets reduced to something that is difficult to apply in practice. For example, if you are a software engineer, you may be told to develop this very specific technology and, at the same time, do no harm. The part that says do no harm is actually harder and more challenging to achieve.

‘I think the answer might be to start by applying the rules we already have, which is where human rights offers something very important. Human rights, as a body of law in its current form, has existed for over 70 years. There probably will be some gaps, but we don’t need to sit under a Bodhi tree and work out all the ethical principles. You can say that, at the very least, you are going to comply with the basic principles of international human rights law.’

Santow notes that start-ups often talk about the minimum viable product (MVP), where a new product or website is developed with sufficient features to satisfy early adopters. The final, complete set of features is only designed and developed after considering feedback from the product’s initial users.

‘Clearly, this has been a powerful approach for a lot of organisations. It’s not, in my view, minimal or viable if it doesn’t have basic human rights protections. One of the things we have been very concerned about is AI-powered products that have gone out into the real world in a form that is still so rough that it doesn’t properly protect people’s rights.

‘There are some advantages to being first, but being first and then suffering a scandal or harming people could potentially kill your business.’

In January 2019, the Australian Human Rights Commission and the World Economic Forum published a white paper exploring models of governance and leadership in AI. It asks whether Australia needs a new organisation to take a central role in promoting responsible innovation in AI and related technologies.

Such an organisation, it says, could combine capacity building, expert advice, governance, leading practices and innovative interventions that foster the benefits of AI while mitigating risks.

Submissions are due by 8 March 2019.

