Why AI investments fail to deliver

The success or failure of AI initiatives has more to do with people than with technology. If you want to put AI into practice in a way that improves business outcomes, you must avoid these six mistakes.

According to two recent Gartner reports, 85% of AI and machine learning projects fail to deliver, and only 53% make it from prototype to production. Yet the same reports show little sign of a slowdown: many organizations plan to increase their AI investments.

Many of these failures are avoidable with a little common-sense business thinking. The drivers to invest are powerful: FOMO (fear of missing out), a frothy VC investment bubble in AI companies with big marketing budgets, and, to some extent, a recognition of the genuine need to harness AI-driven decision-making and move toward a data-driven enterprise.

Instead of thinking of an AI or machine learning project as a one-shot wonder, like upgrading a database or adopting a new CRM system, it’s best to think of AI as an old-fashioned capital investment, similar to how a manufacturer would justify the acquisition of an expensive machine.

The manufacturer wouldn’t treat the machine as a shiny new toy, the way many organizations treat AI and machine learning. The purchasing decision would consider floor space, spare parts, maintenance, staff training, product design, and marketing and distribution channels for the new or improved product. Equal thought should go into bringing a new AI or machine learning system into the organization.

Here are six common mistakes organizations make when investing in AI and machine learning.

Putting the cart before the horse

Embarking on an analytics program without knowing what question you are trying to answer is a recipe for disappointment. It is easy to take your eye off the ball when there are so many distractions. Self-driving cars, facial recognition, autonomous drones, and the like are modern-day wonders, and it’s natural to want those kinds of toys to play with. Don’t lose sight of the core business value that AI and machine learning bring to the table: making better decisions.

Data-driven decisions are not new. R.A. Fisher, arguably the world’s first “data scientist,” outlined the essentials of making data-driven decisions in 10 short pages in his 1926 paper, “The Arrangement of Field Experiments.” Operations research, Six Sigma, and the work of statisticians like W. Edwards Deming illustrate the importance of analyzing data against statistically computed limits as a way of quantifying variation in processes.

In short, you should start by looking at AI and machine learning as a way to improve existing business processes rather than as a new business opportunity. Begin by analyzing the decision points in your processes and asking, “If we could improve this decision by x%, what effect would it have on our bottom line?”
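
To make that question concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption, not a benchmark: a hypothetical recurring decision (say, an application review), a made-up cost per bad call, and a hypothetical 10% improvement.

```python
# Back-of-the-envelope value of improving one recurring decision.
# All figures below are illustrative assumptions, not benchmarks.

decisions_per_year = 200_000    # e.g., applications reviewed annually (assumed)
cost_per_bad_decision = 450.0   # average loss when the decision is wrong, in dollars (assumed)
baseline_error_rate = 0.08      # how often the current process gets it wrong (assumed)
improvement = 0.10              # "if we could improve this decision by 10%..."

baseline_loss = decisions_per_year * baseline_error_rate * cost_per_bad_decision
improved_loss = baseline_loss * (1 - improvement)

print(f"Baseline annual loss:           ${baseline_loss:,.0f}")
print(f"Annual loss after improvement:  ${improved_loss:,.0f}")
print(f"Value of a 10% better decision: ${baseline_loss - improved_loss:,.0f} per year")
```

If the annual value clears the all-in cost of building and maintaining the model, the decision point earns a place on the roadmap; if not, move on to the next one. That is the capital-investment mindset described above, applied one decision at a time.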

Neglecting organizational change

The difficulty of implementing change management is a large contributor to the overall failure of AI projects. There’s no shortage of research showing that the majority of transformations fail, and the technology, models, and data are only part of the story. Equally important is a data-first employee mindset. In fact, that shift in mindset may be even more important than the AI itself. An organization with a data-driven mindset could be just as effective using spreadsheets.

The first step toward a successful AI initiative is building trust that data-driven decisions are superior to gut feel or tradition. Citizen data scientist efforts have mostly failed because line-of-business managers or the executive suite cling to received wisdom, lack trust in the data, or refuse to yield their decision-making authority to an analytics process. The result is that “grass-roots” analytics activity—and many top-down initiatives as well—have produced more dabbling, curiosity, and résumé-building than business transformation.

If there is any silver lining, it is that organizational change, and the issues involved, have been studied extensively. Organizational change is an area that tests the mettle of the best executive teams. It can’t be achieved by issuing orders from above; it requires changing minds and attitudes, softly, skillfully, and typically slowly, recognizing that each individual will respond differently to nudges toward desired behaviors. Generally, four focus areas have emerged: communication, leading by example, engagement, and continuous improvement, all of which are directly related to the decision management process.

Changing organizational culture around AI can be especially challenging given that data-driven decisions are often counterintuitive. Building trust that data-driven decisions are superior to gut feel or tradition requires an element of what is termed “psychological safety,” something only the most advanced leadership organizations have mastered. It’s been said so many times there’s an acronym for it: ITAAP, meaning “It’s all about people.” Successful programs often devote more than 50% of the budget to change management. I would argue it should be closer to 60%, with the extra 10% going toward a project-specific people analytics program in the chief human resources officer’s office.

Throwing a Hail Mary pass early in the game

Just as you can’t build a data culture overnight, you shouldn’t expect immediate transformational wins from analytics projects. A successful AI or machine learning initiative requires experience in people, process, and technology, and good supporting infrastructure. Gaining that experience does not happen quickly. It took many years of concerted effort before IBM’s Watson could win Jeopardy or DeepMind’s AlphaGo could defeat a human Go champion.

Many AI projects fail because they are simply beyond the capabilities of the company. This is especially true when attempting to launch a new product or business line based on AI. There are simply too many moving parts involved in building something from scratch for there to be much chance of success.

As Dirty Harry said in Magnum Force, “A man’s got to know his limitations,” and this applies to companies too. There are countless business decisions made in large enterprises daily that could be automated by AI and data. In aggregate, tapping AI to improve small decisions offers better returns on the investment. Rather than betting on a long shot, companies would be better off starting with less glamorous, and less risky, investments in AI and machine learning to improve their existing processes. The press room might not notice, but the accountants will.

Even if you are already successfully using AI to make data-driven decisions, improving existing models may be a better investment than embarking on new programs. A 2018 McKinsey report, “What’s the value of a better model?”, suggests that even small increases in predictive ability can spark enormous increases in economic value.

Inadequate organizational structure for analytics

AI is not a plug-and-play technology that delivers immediate returns on investment. It requires an organization-wide change of mindset, and a change in internal institutions to match. Typically there is an excessive focus on talent, tools, and infrastructure and too little attention paid to how the organizational structure should change.

Some formal organizational structure, with support from the top, will be necessary to achieve the critical mass, momentum, and cultural change required to turn a traditional, non-analytic enterprise into a data-driven organization. This will require new roles and responsibilities as well as a “center of excellence.” The form that the center of excellence (COE) should take will depend on the individual circumstances of the organization.

Generally speaking, a bicameral model seems to work best, where the core of the AI responsibilities are handled centrally, while “satellites” of the COE embedded in individual business units are responsible for coordinating delivery. This structure typically results in increased coordination and synchronization across business units, and leads to greater shared ownership of the AI transformation.

The COE, led by a chief analytics officer, is best positioned to handle responsibilities like developing education and training programs, creating AI process libraries (data science methodology), producing the data catalog, building maturity models, and evaluating project performance. The COE essentially handles duties that benefit from economies of scale. These will also include nurturing AI talent, negotiating with third-party data providers, setting governance and technology standards, and fostering internal AI communities.

The COE’s representatives in the various business units are better positioned to deliver training, promote adoption, help identify the decisions augmented by AI, maintain the implementations, incentivize programs, and generally decide where, when, and how to introduce AI initiatives to the business. Business unit reps could be augmented on a project basis by a “SWAT team” from the COE.

Not embedding intelligence in business processes

One of the most common stumbling blocks in deriving value from AI initiatives is incorporating data insights into existing business processes. This “last mile” challenge is also one of the easiest to solve using a business rules management system (BRMS). The BRMS is mature technology, in wide use since the early 2000s, and it has gained a new lease on life as a vehicle for deploying predictive models. A BRMS makes an ideal decision point in an automated business process that is manageable and reliable. If your business is not using a BPM (business process management) system to automate, streamline, and rationalize core business processes, then stop right here. You don’t need AI; you need the basics first: BPM and BRMS.

Most modern business rules management systems include model management and cloud-based deployment options. In a cloud scenario, citizen data scientists could create models using tools like Azure Machine Learning Studio and the InRule BRMS, with the models deployed directly to business processes via REST endpoints. A cloud-based combination such as this allows for easy experimentation with the decision-making process at a far more reasonable cost than a full-blown AI program.
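
As a rough sketch of what that “last mile” can look like, the snippet below calls a deployed model’s REST scoring endpoint from a process step and turns the returned score into a routing decision. The endpoint URL, API key, payload shape, field names, and thresholds are all placeholders, not the actual request format of any particular platform; consult your BRMS and cloud vendor documentation for the real contract.

```python
# Minimal sketch: a process step that calls a deployed model's REST scoring
# endpoint and turns the prediction into a routing decision.
# The URL, credential, payload shape, and thresholds are hypothetical placeholders.
import requests

SCORING_URL = "https://example-scoring-endpoint/score"  # placeholder endpoint
API_KEY = "REPLACE_WITH_YOUR_KEY"                       # placeholder credential


def score_application(application: dict) -> float:
    """Send one case to the model service and return its predicted risk score."""
    response = requests.post(
        SCORING_URL,
        json={"data": [application]},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return float(response.json()["predictions"][0])


def route_application(application: dict) -> str:
    """Rule layer: in practice these thresholds would live in the BRMS, not in code."""
    risk = score_application(application)
    if risk < 0.2:
        return "auto-approve"
    if risk < 0.7:
        return "manual-review"
    return "decline"


if __name__ == "__main__":
    print(route_application({"amount": 12_000, "term_months": 36, "income": 58_000}))
```

The point is not the specific tooling but the separation of concerns: the model supplies a score, while the rules system owns the thresholds, the exceptions, and the audit trail, which is what makes the decision point manageable and reliable.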

Failure to experiment

Now we get to the other side of the coin. How do you use AI to create new business models, disrupt markets, create new products, innovate, and boldly go where no one has gone before? Venture-backed start-ups have a failure rate of about 75%, and they are at the bleeding edge of AI business models. If your new AI-based product or business initiatives have a lower failure rate, then you are beating some of the best investors out there.

Even the most elite technology experts fail, and often. Eric Schmidt, former CEO of Google, disclosed some of the company’s methods during his 2011 Senate testimony:

To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm. Most of these changes are imperceptible to users and affect a very small percentage of websites, but each one of them is implemented only if we believe the change will benefit our users.

That works out to a 96% failure rate for proposed changes.
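
For anyone checking the arithmetic, the calculation uses the figures Schmidt cited above:

```python
# Failure rate for proposed changes, using the figures from Schmidt's testimony.
proposed = 13_311   # precision evaluations of proposed algorithm changes in 2010
shipped = 516       # changes ultimately made to Google's algorithm

failure_rate = 1 - shipped / proposed
print(f"Failure rate for proposed changes: {failure_rate:.0%}")  # roughly 96%
```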

The key take-away here is that failure will occur. Inevitably. The difference between Google and most other companies is that Google’s data-driven culture allows it to learn from its mistakes. Notice as well the key word in Schmidt’s testimony: experiments. Experimentation is how Google—and Apple, Netflix, Amazon, and other leading technology companies—have managed to benefit from AI at scale.

A company’s ability to create and refine its processes, products, customer experiences, and business models is directly related to its ability to experiment.

What next?

Much like the industrial revolution swept away companies that failed to adopt machine manufacturing over hand-crafted products, the AI and machine learning sea change will wipe out companies that fail to adapt to the new environment. Although it’s tempting to think the challenges of AI are primarily technical, and to blame failures on technology, the reality is that most failures of AI projects are failures in strategy and in execution.

In many ways, this is good news for companies. The “old-fashioned” business challenges behind the failures of AI projects are well understood. While you can’t avoid the necessary changes in culture, organizational structure, and business processes, some comfort can be taken in knowing that the routes have been charted; the challenge is in steering the ship and avoiding the rocks. Starting with small, simple experiments in applying AI to existing processes will help you gain valuable experience before embarking on longer AI journeys.

Copyright © 2021 IDG Communications, Inc.