Modern artificial intelligence is so complex that we often have little insight into the inner workings of its algorithms: AI functions as a ‘black box’. It is essential for designers to recognize this limitation, and to always consider contestability in AI-led systems.
In 2019, prominent IT figures lashed out at Apple after receiving credit limits many times higher than their partners’. In a series of Twitter posts, David Heinemeier Hansson railed against Apple, claiming that the program was sexist.
Heinemeier Hansson, the creator of Ruby on Rails, filed the same financial information as his wife, yet the algorithm decided he deserved a credit limit 20 times higher than hers. Apparently, no appeal worked.
The tweet sparked a series of replies, including one from Apple co-founder Steve Wozniak. Wozniak explained that the same thing had happened to him and his partner, and that it was “Hard to get to a human for a correction though”.
The algorithm behind the Apple Card, issued by Goldman Sachs, subsequently became the subject of an official investigation led by the New York Department of Financial Services.
Contestability in AI
What makes this story stand out is the users’ inability to contest the decisions made by artificial intelligence. The system, as designed by Apple and Goldman Sachs, did not offer the users any way out.
Contestable artificial intelligence is perhaps the most neglected aspect of AI-led user experiences. A user’s inability to have a say in the decisions made by AI can have far-reaching implications. This quote by David Collingridge captures it best:
“When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming.”
— Collingridge, D. (1980). The Social Control of Technology. Frances Pinter.
These problems are inherent to artificial intelligence: machine learning algorithms learn and change over time, but they also tend to pick up biases from their training data. When these biases are reflected in decisions made by AI, they can have far-reaching consequences such as discrimination and exclusion.
Research in contestable AI systems
The PhD research group ‘Contestable AI’ at TU Delft focuses specifically on contestable AI by design. Their goal is:
“To ensure artificial intelligence (AI) systems respect the human rights to autonomy and dignity, they must allow for human intervention during development and after deployment.”
We can guard users against harmful decisions by ensuring that AI-led systems are contestable by design. These systems should remain responsive to human intervention throughout their life cycle.
Explainability is an evolving area of ML research: researchers are actively looking for ways to make models less of a black box. But these solutions are not easy to find. In the meantime, design can lend a hand.
Problems such as fairness, transparency, and accountability cannot be solved by technical innovation alone. They have to be designed for, by creating human intervention points. That is where design comes in.
Contestable AI systems are a strength
If we design systems in such a way that the AI’s decisions are not definitive, we create room for compromise and improve the contestability of the system itself. When designing an AI-led system, think about ways to integrate compromise-seeking between user and AI: find a consensus between the user and the system.
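As a rough illustration of what compromise-seeking could mean in practice (the scenario, names, and numbers below are hypothetical, loosely inspired by the credit-limit story above): let the AI’s proposal be a starting point that the user can counter, with a human stepping in when no agreement is possible.

```python
def negotiate(proposed_limit: int, requested_limit: int,
              hard_cap: int) -> tuple[int, str]:
    """Compromise-seeking sketch: the AI's proposal is a starting
    point, not a verdict. A counter-request within policy bounds is
    granted; anything beyond the cap goes to a human instead of
    ending in a flat, uncontestable refusal."""
    if requested_limit <= proposed_limit:
        return requested_limit, "granted"
    if requested_limit <= hard_cap:
        return requested_limit, "granted after counter-proposal"
    return proposed_limit, "escalated to a human reviewer"


# A user counters the algorithm's proposal and reaches a consensus:
print(negotiate(proposed_limit=5_000, requested_limit=8_000, hard_cap=10_000))
# -> (8000, 'granted after counter-proposal')
```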
The implementation of contestable AI is essentially an extension of UX research. Conflict that is continuously surfaced and addressed creates a flow of user-generated data that can keep improving the system itself. Implementing compromise is not a weakness but a strength.
It is essential to leverage that user-generated data and to design with users, creating a system that benefits the user at all times, even within the boundaries of the AI black box. This continuous loop of generating data and designing better systems produces a better product in the long run.
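As a minimal sketch of that loop, assuming each decision has an identifier and users can attach a reason to their objection (all names here are hypothetical), contested decisions can be collected as labelled counter-examples for the next evaluation or training round:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Contest:
    """A user's objection to a single automated decision."""
    decision_id: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ContestLog:
    """Collects contested decisions as user-generated data for the
    next evaluation or training round."""

    def __init__(self) -> None:
        self._contests: list[Contest] = []

    def record(self, decision_id: str, reason: str) -> Contest:
        contest = Contest(decision_id, reason)
        self._contests.append(contest)
        return contest

    def export_for_review(self) -> list[dict]:
        """Hand contested cases to the team as labelled counter-examples."""
        return [{"decision_id": c.decision_id,
                 "reason": c.reason,
                 "at": c.created_at.isoformat()} for c in self._contests]


log = ContestLog()
log.record("credit-limit-42", "Partner with identical financials got a far higher limit")
print(log.export_for_review())
```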
How to design contestable systems
Designing contestable intervention points in systems does not have to be difficult; it is simply necessary. Implementing moments where the user can object to a system’s decisions can be as simple as having a human check the output.
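A minimal sketch of such a human check, assuming the model reports a confidence score per decision (the threshold and all names below are illustrative, not from the article):

```python
from dataclasses import dataclass


@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float  # the model's own confidence estimate, 0.0-1.0


def route(decision: Decision, review_queue: list,
          threshold: float = 0.9) -> str:
    """Apply confident outcomes automatically; send everything else
    to a human reviewer before it takes effect."""
    if decision.confidence < threshold:
        review_queue.append(decision)
        return "queued for human review"
    return "applied automatically"


queue: list = []
print(route(Decision("application-17", "reject", 0.64), queue))   # queued
print(route(Decision("application-18", "approve", 0.97), queue))  # applied
```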
First, it is important to explain the results: it should be clear how the algorithm arrives at its output. This may not always be possible, especially given the ‘black box’ property of some algorithms. In that case, explain how the data is being used and what the user can expect.
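For example, if the model can produce per-feature contributions (via whatever explainability tooling applies), they can be turned into a plain-language summary shown next to the result; the function and numbers below are purely illustrative:

```python
def explain(contributions: dict, top_n: int = 3) -> str:
    """Turn per-feature contributions into a plain-language summary
    that can be shown next to the result."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [f"{name} {'raised' if value > 0 else 'lowered'} the score by {abs(value):.1f}"
             for name, value in ranked[:top_n]]
    return "This result was mostly driven by: " + "; ".join(parts) + "."


# Illustrative numbers only:
print(explain({"income": 2.4, "existing_debt": -3.1, "account_age": 0.8}))
```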
It is also important to indicate wrong or less credible answers by adjusting your visual design or layout. Do not be afraid to let the user know you do not have an answer.
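A small sketch of how that could work, mapping model confidence to presentation states a UI could render (the thresholds are illustrative):

```python
def credibility_state(confidence: float) -> dict:
    """Map model confidence to a presentation state for the UI;
    the thresholds here are illustrative."""
    if confidence >= 0.85:
        return {"style": "default", "note": ""}
    if confidence >= 0.5:
        return {"style": "muted", "note": "This answer is uncertain."}
    return {"style": "warning", "note": "We do not have a reliable answer."}


print(credibility_state(0.3))  # -> {'style': 'warning', 'note': '...'}
```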
AI may be well suited to some situations, functioning as an extension of the user’s abilities. If the user is working with AI, let them stay in control: users should always be able to intervene in, or ignore, the output of any AI-powered system.
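One sketch of keeping the user in control: treat the AI’s output as a suggestion that never takes effect until the user accepts or overrides it (the class and method names below are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AssistedDecision:
    """An AI suggestion that only takes effect once the user accepts it."""
    suggestion: str
    user_choice: Optional[str] = None  # None until the user decides

    def accept(self) -> None:
        self.user_choice = self.suggestion

    def override(self, choice: str) -> None:
        self.user_choice = choice  # the user's word is final

    @property
    def final(self) -> Optional[str]:
        return self.user_choice  # never the raw suggestion on its own


decision = AssistedDecision(suggestion="approve")
decision.override("escalate to an advisor")
print(decision.final)  # -> 'escalate to an advisor'
```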
I have written before about designing AI-led systems. When designing with AI-led algorithms, it is good to follow common practices like the ones above.
The scan cars of Amsterdam
An army of scan cars is used for widespread surveillance in Amsterdam. The cars are part of a municipal program that monitors the over-usage of parking spaces throughout the city.
The scan cars use a roof-mounted camera to identify vehicles and issue parking fees using object recognition. They automate license plate identification and the associated background checks with specialized scanning equipment and an AI-based identification service.
The service currently covers more than 150,000 street parking spaces in the city of Amsterdam. Because the surveillance is fully automated, the city has received worried reactions to this type of surveillance.
The loss of autonomy for citizens, and the problems that come with automating these processes, have led to new developments around the service. That is why the municipality set out to create a more contestable AI system. Together with experts in the field, the municipality has opened up ways for users to contest the AI-led service. As explained on their website:
“Together with UNSense, we invited representatives from the City of Amsterdam and Rotterdam, TADA and researchers from TU Delft to join us for a 3-day sprint to design “the scan car of the future”, that also looks at the human and fair values of the advances in technology.
During these sessions, several design strategies were explored. Among others, participants investigated if the sensing the car does could be minimized, if function(s) of the car could become more understandable and what features could be added to possibly bring about benefits for the individual — being the citizens of Amsterdam.”
Asking questions such as “What if you could talk back?” opens up immense user-centered design opportunities. The scary surveillance monster has since been redesigned into a more contestable counterpart, empowering citizens in the process.
I feel that the redesigned scan car is a perfect example of how contestable AI systems are a strength. The city now has data on the usage of parking spaces and can redesign the city accordingly. The system itself has improved too: it is no longer scary to use, and it cannot be misused as long as these contestable qualities are in place. The win is on both sides: users and product owners both benefit from the system, while autonomy and freedom are preserved for everyone.