Researchers from Melbourne’s Monash University have developed a framework they say could be used to provide greater transparency into the type of data being fed into artificial intelligence (AI) systems, such as driverless cars and medical equipment.
The researchers outline in their research paper, What information is required for explainable AI? A provenance-based research agenda and future challenges, six necessary elements of information that need to be considered: who is responsible for the data generation/modification/explanation; when and where the explanation is given; why the explanation is generated; what is being processed/explained; and which security/safety information is added for explanation. They say using this framework can help demonstrate that information fed into an AI system is authentic, has been securely propagated, and provides end-users with sufficient information about the AI decision-making systems.
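To illustrate how those six elements might travel alongside an AI decision, below is a minimal sketch of a provenance record; the field names and the example values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: a record capturing the six elements described above
# (who, when, where, why, what, which). Field names are assumptions, not the
# paper's specification.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ProvenanceRecord:
    who: str      # actor responsible for data generation/modification/explanation
    when: str     # time the explanation is given
    where: str    # system or location at which the explanation is given
    why: str      # reason the explanation is generated
    what: str     # data or decision being processed/explained
    which: str    # security/safety information attached to the explanation


record = ProvenanceRecord(
    who="diagnosis-model v2.1 (hospital deployment)",       # hypothetical actor
    when=datetime.now(timezone.utc).isoformat(),
    where="emergency-department workstation 4",              # hypothetical location
    why="clinician requested justification for a high-risk score",
    what="patient vitals snapshot used as model input",
    which="input signed by ingestion service; integrity hash verified",
)

# Serialise the record alongside the decision so end-users can audit it later.
print(json.dumps(asdict(record), indent=2))
```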
“We are ultimately trying to achieve better explainability, both from human and security-aware aspects of AI systems. In our paper, we suggest the adoption of a human-centric perspective, which is all about looking at and understanding the data used for decision generation in AI-based systems,” Monash University PhD researcher Fariha Jaigirdar explained.
The researchers also emphasised that being able to explain how AI decisions are made is critical given that AI has a “considerable impact” on people’s everyday lives.
“Explainable AI seeks to provide greater transparency into how algorithms make decisions. The reason this is so important is because the decisions [that] AI systems make could ultimately lead to an incorrect medical diagnosis or a pedestrian being struck by a driverless car,” Monash University Faculty of IT associate professor Carsten Rudolph said.
“Even if the overall system might be working perfectly, it is essential to know the very root of performing the decision, especially when the decision or prediction is crucial.”
The paper further added that the current relationship between input data, training data, and resulting classifications, as well as the provenance of the various inputs, is not obvious to end-users.
“Without documenting the detailed connection of the training data-set with the decision generated, the overall system’s acceptability is questionable … for example, a woman walking across a road in Arizona on March 18, 2018, was struck and killed by a self-driving car, and it proved difficult to identify the cause of the accident,” the paper stated.
Earlier this month, Monash and QUT proposed a blockchain-based solution for local energy trading in which location and electrical distances were the core price drivers, and which therefore included an anonymous proof-of-location algorithm.
The researchers claimed the decentralised system would encourage energy producers and consumers to negotiate “truthfully”.