
IBM Research launches explainable AI toolkit

IBM Watson IoT Center Munich
Image Credit: IBM Watson



IBM Research today introduced AI Explainability 360, an open source collection of state-of-the-art algorithms that use a range of techniques to explain AI model decision-making.

The launch follows IBM’s release a year ago of AI Fairness 360 for the detection and mitigation of bias in AI models.

IBM is sharing its latest toolkit to increase trust in and verification of artificial intelligence, and to help businesses that must comply with regulations in order to use AI, IBM Research fellow and responsible AI lead Saska Mojsilovic told VentureBeat in a phone interview.

“That’s fundamentally important, because we know people in organizations will not use or deploy AI technologies unless they really trust their decisions. And because we create infrastructure for a good part of this world, it is fundamentally important for us — not because of our own internal deployments of AI or products that we might have in this space, but it’s fundamentally important to create these capabilities because our clients and the world will leverage them,” she said.


The toolkit is also being shared, Mojsilovic said, because industry progress on the creation of trustworthy AI has been “painfully slow.”

AI Explainability 360 draws on algorithms and papers from IBM Research group members. Source materials include “TED: Teaching AI to Explain Its Decisions,” a paper accepted for publication at the AAAI/ACM Conference on AI, Ethics, and Society, as well as the often-cited “Towards Robust Interpretability with Self-Explaining Neural Networks,” accepted for publication at NeurIPS 2018.

The toolkit draws on a number of different ways to explain outcomes, such as contrastive explanations, an algorithm that attempts to explain a decision by identifying important information that is missing from the input.
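Contrastive explanations center on so-called pertinent negatives: things that must stay absent for the current prediction to hold. As a rough, self-contained sketch of that idea (not the toolkit's actual contrastive explanations implementation, which solves a regularized optimization problem), the toy code below flags absent binary features whose addition would flip a scikit-learn classifier's prediction; the data and feature setup here are entirely made up for illustration.

```python
# Toy "pertinent negative" search: which absent features would, if added,
# change the model's prediction? A simplified stand-in for contrastive
# explanation methods, not the aix360 implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical binary features (e.g. account attributes in a credit model).
X = rng.integers(0, 2, size=(500, 6)).astype(float)
y = ((X[:, 0] + X[:, 3]) >= 2).astype(int)  # label depends on features 0 and 3

model = LogisticRegression().fit(X, y)

def pertinent_negatives(model, x):
    """Return the prediction and the absent features whose addition flips it."""
    base_pred = model.predict(x.reshape(1, -1))[0]
    flips = []
    for j in np.where(x == 0)[0]:       # only consider features missing from x
        x_plus = x.copy()
        x_plus[j] = 1.0                 # hypothetically add the missing feature
        if model.predict(x_plus.reshape(1, -1))[0] != base_pred:
            flips.append(int(j))
    return base_pred, flips

x = np.array([1.0, 0, 0, 0, 1.0, 0])
pred, missing = pertinent_negatives(model, x)
print(f"prediction={pred}; it would change if feature(s) {missing} were present")
```

In this toy setup the explanation for the negative decision is that feature 3 is missing, which is exactly the kind of "important missing information" a contrastive explanation surfaces.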

Other algorithms are tailored to particular scenarios, such as Protodash, which explains results with representative prototypes, and algorithms designed to explain credit scoring model results to a consumer who was recently denied a loan, or to a loan officer who needs an explanation of an AI model's decision-making in order to comply with the law.
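Prototype-based explanations like Protodash summarize a decision or a dataset by pointing to a handful of representative examples. The sketch below is a simplified, greedy stand-in for that idea under assumed synthetic data; it does not reproduce the toolkit's ProtodashExplainer API or its learned importance weights, it simply picks the points whose kernel similarity profile best covers the rest of the data.

```python
# Greedy prototype selection in the spirit of ProtoDash: pick examples whose
# RBF-kernel similarity to all other points provides the best "coverage".
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_prototypes(X, m=3, gamma=0.5):
    """Greedily choose m rows of X that best cover all rows under an RBF kernel."""
    K = rbf_kernel(X, X, gamma)          # similarity of every point to every point
    covered = np.zeros(len(X))           # best similarity achieved so far per point
    chosen = []
    for _ in range(m):
        # coverage achieved if each candidate j were added next
        gains = np.maximum(K, covered).sum(axis=1)
        gains[chosen] = -np.inf          # never reselect a prototype
        j = int(np.argmax(gains))
        chosen.append(j)
        covered = np.maximum(covered, K[j])
    return chosen

rng = np.random.default_rng(1)
# Two synthetic clusters; good prototypes should include a point from each.
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
print("prototype indices:", select_prototypes(X, m=2))
```

Shown to a loan officer, prototypes of this kind would be past applicants most similar to the case at hand, giving a concrete reference point for why the model decided as it did.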

All eight algorithms in the toolkit come from IBM Research, but more algorithms from the wider AI community will be added in the future.

“It’s not one team of researchers or one organization that can move the needle; we all benefit when we join forces and do it together, and that’s why we intend to grow the toolbox,” Mojsilovic said.
