New Accenture Research on Responsible AI Reveals How Organizations Can Prepare for Impending Regulation

New research from Accenture shows that nearly all (97%) responding companies believe that regulation will impact them to some extent, and 95% believe that at least part of their business will be affected by EU regulations specifically. Yet only 6% are fully prepared to accommodate near-term and ongoing regulatory changes.

Governments and regulators are considering how to supervise and set standards for the responsible development and use of AI. The EU’s proposed AI Act is the best-known example: once ratified, anyone who wants to use, build or sell AI products and services within the EU will have to meet the legislation’s requirements. There is a clear incentive to accelerate AI transformation, but organizations need to proceed carefully.

Key Findings 

  • Awareness of upcoming regulation: Nearly all (97%) respondents believe regulation will impact them to some extent. And 95% believe at least part of their business will be affected by the proposed EU AI Act specifically.
  • Plans to invest: 77% said the future regulation of AI is a current company-wide priority. 80% plan to commit at least 10% of their total AI budget to meeting regulatory requirements by 2024.
  • Concerns about clarity: Uncertainty around the rollout process and timing of future regulation (35%) and the potential for inconsistent standards across regions (34%) were the biggest concerns.
  • Most (94%) struggle to operationalize across all key elements of Responsible AI.
  • Only 4% of companies have a cross-functional Responsible AI team in place.  
  • 77% are both users and providers of AI solutions, yet only 12% require their suppliers to be Responsible AI compliant.  
  • 55% of companies do not have specific roles for Responsible AI in their organization. 
  • In 30% of companies, there are no active KPIs to measure Responsible AI. 

Organizations can build a Responsible AI foundation supported by these four key pillars:

  1. Clear principles and governance structures for AI (C-suite support is critical). 
  2. A risk management framework that monitors and operationalizes current and future policies.  
  3. Technology tools that support fairness, explainability, robustness, accountability and privacy (see the illustrative sketch after this list). 
  4. Company culture and training that position Responsible AI as a business imperative and give employees a clear understanding of how to translate these principles into action.  
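
To make pillar 3 concrete, below is a minimal sketch of the kind of fairness check a Responsible AI team might track as a KPI: the demographic parity difference, i.e., the gap in positive-decision rates between two groups. The function, the toy data and the 0.2 threshold are hypothetical illustrations in plain Python, not tooling described in the Accenture report.

```python
# Hypothetical sketch: a simple Responsible AI fairness KPI.
# Computes the demographic parity difference -- the gap in
# positive-decision rates between two groups. All names, data
# and the 0.2 threshold are illustrative assumptions.
from typing import Sequence

def demographic_parity_difference(
    predictions: Sequence[int],   # model decisions: 1 = positive outcome
    groups: Sequence[str],        # protected-attribute group per record
    group_a: str,
    group_b: str,
) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    def positive_rate(group: str) -> float:
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        if not outcomes:
            raise ValueError(f"no records for group {group!r}")
        return sum(outcomes) / len(outcomes)

    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    # Toy data: group "a" receives positive decisions 75% of the time,
    # group "b" only 25% of the time, so the gap is 0.50.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_difference(preds, grps, "a", "b")
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.2:  # illustrative policy threshold
        print("KPI breached -- escalate per governance policy")
```

In practice, a metric like this would run against production predictions on a schedule, with breaches escalated through the governance structures described in pillar 1.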

“Companies must prepare for AI regulation now, instead of taking a ‘wait and see’ approach or viewing compliance as just checking a box for completion, both of which can become unsustainable,” said Ray Eitel-Porter, global lead for Responsible AI at Accenture. “Though the proposed EU AI Act will have a two-year grace period, our experience is that it can take large companies at least this long to become compliant.”

For its latest research, titled “From AI compliance to competitive advantage: becoming responsible by design,” Accenture surveyed 850 C-suite executives in 17 countries to (i) understand companies’ attitudes towards AI regulation and (ii) assess their readiness to comply with it.
