AI Risks Losing Trust If It Does Not Offer Explanations


In this special guest feature, Ramprakash Ramamoorthy, Product Manager for AI and ML at Zoho Labs, discusses best practices for developing explainable AI and common pitfalls teams encounter when building AI tools. Ramprakash is in charge of implementing strategic, powerful AI features at ManageEngine, helping to deliver an array of IT management products well-suited to enterprises of any size. He is a passionate leader with a levelheaded approach to emerging technologies, and a sought-after speaker at tech conferences and events.

With investments in AI technology continuing to grow at a considerable pace, there is a gradual transition in the role that AI plays in the IT world. Today, AI is not only a tool for automating processes; it also automates decisions. However, caution must be exercised in this regard. In its 2019 predictions, Forrester anticipated increased demand for transparent and easily understandable AI models, noting that “45% of AI decision makers say trusting the AI system is either challenging or very challenging.” This explains why the pursuit of explainable AI is gaining significance. When AI makes a decision, it must explain why it did so, or else it risks losing consumer trust and being ignored in the long run.


Over the past few decades, AI has gone from science fiction to an integral part of everyday business operations. According to a recent report from Microsoft and EY, “65% of organizations in Europe expect AI to have a high or a very high impact on the core business.” Looking out a bit further on the horizon, Gartner predicts that “by 2023, 40% of infrastructure and operations teams will use AI-augmented automation in enterprises, resulting in higher IT productivity.” As companies proceed from narrow AI to general AI—and begin automating not only processes, but also decisions—it’s vital that AI tools explain their behavior. 

The importance of explainable artificial intelligence cannot be overstated: AI tools must justify their decisions with detailed explanations. If an AI tool fails to explain how it reached a given decision, users may lose faith in the tool altogether.

When implementing AI tools in your business, it's important to retrofit AI into your existing workflows. After processes are successfully automated, you can begin to automate decisions as well. Even if you have a 100-member team specializing in anomaly detection, computer vision, natural language processing (NLP), and other AI techniques, all AI decisions should require approval from a human, at least until you've fully honed the process. Ideally, your AI tools should be accurate at least 80 percent of the time, and for every single automated decision, your tools should offer an explanation as well as confidence intervals.
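To make that gate concrete, here is a minimal sketch in Python of what confidence-based routing to a human approver might look like; the names, data structure, and the 80 percent threshold are illustrative assumptions, not any particular product's implementation:

```python
# Minimal sketch of human-in-the-loop gating for automated decisions.
# All names (Decision, CONFIDENCE_THRESHOLD, handle) are hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # require at least 80% confidence before auto-applying

@dataclass
class Decision:
    action: str        # what the model wants to do
    confidence: float  # the model's confidence, between 0 and 1
    explanation: str   # human-readable reason for the decision

def handle(decision: Decision) -> str:
    """Auto-apply only high-confidence decisions; route everything else to a human."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO-APPLIED: {decision.action} ({decision.explanation})"
    return f"NEEDS HUMAN APPROVAL: {decision.action} ({decision.explanation})"

print(handle(Decision("restart service X", 0.92, "matches a known recovery pattern")))
print(handle(Decision("block user Y", 0.55, "unusual login times, weak evidence")))
```

The key point of the sketch is that the explanation travels with the decision either way, whether it is applied automatically or handed to a person.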

Why is explainable AI so important?

In a recent report, Forrester notes that “45% of AI decision makers say trusting the AI system is either challenging or very challenging.” Thus, there’s a need for transparent and easily understandable AI models. For all decisions made by AI, there needs to be a readily available explanation.

Acknowledging this, you should offer pre-built explanations for all of your AI decisions. For example, perhaps you’re utilizing NLP and chatbots to streamline processes for technicians; if a particular request is frequently raised and directed to the same sys admin every week at the same time, the AI recognizes this pattern, automates the process, and explains why it did so.
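As a rough sketch of how such a recurring-pattern check could work, the snippet below counts how often the same request lands with the same sysadmin in the same weekly slot; the data layout and the three-occurrence threshold are assumptions made for the example:

```python
# Sketch: detect a request that recurs at the same weekly time slot with the
# same assignee, then propose automating it along with an explanation.
from collections import Counter

past_requests = [
    # (subject, assignee, weekday, hour)
    ("reset VPN token", "alice", "Mon", 9),
    ("reset VPN token", "alice", "Mon", 9),
    ("reset VPN token", "alice", "Mon", 9),
    ("disk cleanup", "bob", "Fri", 17),
]

MIN_OCCURRENCES = 3  # how many repeats before the pattern is trusted

counts = Counter(past_requests)
for (subject, assignee, weekday, hour), n in counts.items():
    if n >= MIN_OCCURRENCES:
        print(
            f"Automating '{subject}' -> {assignee}: "
            f"raised {n} times, always on {weekday} around {hour}:00"
        )
```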

Through “explanation-ready” AI features, you’ll be able to effectively assist IT teams with a host of security concerns, including log management, insider threat analysis, user behavior analysis, and alert fatigue management. And through AI monitoring tools, it’s easier than ever to predict anomalies, outages, combinatorial anomalies, and the root causes of outages. During all of these automated discoveries and decisions, an explanation for the course of action must be provided, along with confidence intervals.

Robust DevOps and IT operations solutions can effectively use AI tools to assess past user behavior and then determine whether an action is anomalous. While accounting for seasonality, changes in schedules and processes, and time of day, these AI tools effectively predict anomalies and outages, ultimately saving your IT teams copious amounts of time and energy.

As an example, perhaps your website monitoring tool notes that a web page loads slowly at the same time each week when it’s accessed from a certain location. AI tools will recognize this pattern and automatically send a ticket to the web manager via your service desk software. By integrating with multiple tools, AI automates processes, saves time, and improves productivity.
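For illustration, here is a simplified sketch of how a monitoring tool might compare a page's load time against a seasonal baseline (same weekday, hour, and location) and attach an explanation to the ticket it raises; the baseline history, the three-sigma rule, and the create_ticket stub are assumptions for the example, not a real service desk integration:

```python
# Sketch: flag a slow page load relative to a seasonal baseline and attach an
# explanation to the resulting ticket. Data and thresholds are illustrative.
from statistics import mean, stdev

# (weekday, hour, location) -> past load times in seconds
history = {
    ("Tue", 14, "eu-west"): [1.1, 1.0, 1.2, 1.1, 1.3],
}

def create_ticket(summary: str, detail: str) -> None:
    # Stand-in for a real service desk integration (e.g. an API call).
    print(f"TICKET: {summary}: {detail}")

def check(load_time: float, weekday: str, hour: int, location: str) -> None:
    samples = history.get((weekday, hour, location), [])
    if len(samples) < 3:
        return  # not enough seasonal history to judge
    baseline, spread = mean(samples), stdev(samples)
    if load_time > baseline + 3 * spread:  # simple three-sigma threshold
        explanation = (
            f"Load time {load_time:.1f}s exceeds the {weekday} {hour}:00 baseline "
            f"of {baseline:.1f}s (spread {spread:.1f}s) for {location}."
        )
        create_ticket("Slow page load detected", explanation)

check(2.6, "Tue", 14, "eu-west")
```

The explanation string is generated at the moment the anomaly is detected, so the ticket that reaches the web manager already states why the alert was raised.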

Again, the important point to drive home is that the AI must be explainable. AI tools can suggest certain decisions; however, if these decisions don’t come with pre-built explanations, people will lose faith in the tools.

