A new report documents the business benefits of ‘responsible AI’

As companies embrace artificial intelligence to drive business strategy, the issue of responsible AI implementation is gaining traction.

A new global research study defines responsible AI as “a framework of principles, policies, tools and processes to ensure that AI systems are developed and managed in good service to individuals and society, while achieving transformative business impact.”

The study, conducted by MIT Sloan Management Review and Boston Consulting Group, found that while AI initiatives are on the rise, responsible AI is lagging behind.

While most organizations surveyed said they view responsible AI (RAI) as a way to mitigate technology risks, including issues of safety, bias, fairness and privacy, they admit to failing to prioritize it. That gap increases the chance of failure and exposes companies to regulatory, financial and customer satisfaction risks.

The MIT Sloan/BCG report, which included survey results as well as interviews with C-level executives and AI experts, identified significant gaps between companies’ interest in RAI and their ability to implement practices across the enterprise.

The survey, conducted in the spring of 2022, analyzed responses from 1,093 participants representing organizations in 22 industries across 96 countries, each reporting annual revenue of at least $100 million.

A majority (84%) of survey respondents believe RAI should be a top management priority, but only slightly more than half (56%) affirm that RAI has achieved that status, and only a quarter say they have a fully mature RAI program.

More than half (52%) of respondents said their organizations were implementing some level of RAI practices, while 79% admitted their implementations were limited in scale and scope.

Why are companies struggling so much when it comes to RAI? Part of the problem is confusion over the term itself, which overlaps with ethical AI: 36% of survey respondents cited the lack of a consistent definition and noted that the practice is still being developed.

Other factors contributing to limited RAI implementation fall into the broader bucket of organizational challenges:

  • 54% of survey respondents struggle to find RAI expertise and talent.
  • 53% cite a lack of RAI training or knowledge among staff members.
  • 43% report limited prioritization and attention from senior leaders.
  • Inadequate funding (43%) and low awareness of RAI initiatives (42%) also hinder the maturity of RAI programs.

As AI becomes more entrenched in business, there is increasing pressure on companies to bridge these gaps and successfully implement RAI as a priority, the report said.

“As we navigate the increasing complexity and unknowns of an AI-driven future, establishing a clear ethical framework is not optional — it’s vital to its future,” said Rianka Roy Chaudhary, a CodeX fellow at Stanford Law School’s Computational Law Center and one of the AI experts interviewed for the report.

Doing RAI right

Companies with the most mature RAI programs — in the MIT Sloan/BCG survey, the nearly 16% of respondents who identified themselves as “RAI leaders” — have several things in common. They see RAI as an organizational problem, not a technical one, and they are investing the time and resources to create comprehensive RAI programs.

These organizations take a more strategic approach to RAI, grounded in a broader view of corporate values and of their responsibility to numerous stakeholders and to society as a whole.

Taking a leadership role in RAI translates into measurable business benefits such as better products and services, improved long-term profitability, and better recruitment and retention. Forty-one percent of RAI leaders reported realizing a measurable business benefit, compared with only 14% of companies investing less in RAI.

RAI leaders are also better equipped to deal with an increasingly active AI regulatory environment: the survey found that more than half (51%) of RAI leaders feel prepared to meet the requirements of evolving AI regulations, compared with less than a third of organizations whose RAI initiatives are still new.

Companies with mature RAI programs adhere to some common best practices. Among them:

Make RAI part of the executive agenda. RAI is not just a “check the box” exercise but part of the organization’s top management agenda. For example, about 77% of RAI leader organizations invest material resources (training, talent, budget) in RAI efforts, compared with 39% of respondents overall.

At these organizations, there is a clear message from the top that responsible implementation of AI is an organizational priority, rather than RAI decisions being dictated by product managers or software developers.

“Without leadership support, practitioners may lack the incentives, time and resources needed to prioritize RAI,” said Steven Vosloo, a digital policy specialist in UNICEF’s Office of Global Insight and Policy and one of the experts interviewed for the MIT Sloan/BCG survey.

In fact, nearly half (47%) of RAI leaders say they include the CEO in their RAI efforts, more than double the rate among their peers.

Take a broader view. In addition to top management involvement, mature RAI programs also draw on a wide range of participants — an average of 5.8 roles at leading companies versus only 3.9 roles at non-leaders, the survey found.

A majority of leading companies (73%) approach RAI as part of their corporate social responsibility efforts, treating society itself as a key stakeholder. For these companies, the values and principles that guide their approach to responsible behavior apply to their entire portfolio of technologies, systems and processes, RAI included.

“Many of the core ideas behind responsible AI, such as bias prevention, transparency and fairness, are already aligned with the basic principles of corporate social responsibility,” said Nitzan Mekel-Bobrov, eBay’s chief AI officer and one of the experts interviewed for the survey. “So it should already feel natural for an organization to tie in its AI efforts.”

Start early, not after the fact. The survey shows that it takes an average of three years to start reaping business benefits from RAI, so companies should start RAI programs as soon as possible, develop the necessary skills and provide training. AI experts interviewed for the survey suggested that RAI maturity should be built ahead of AI maturity in order to avoid failures and significantly reduce the ethical and business risks associated with scaling AI efforts.

Given the high stakes surrounding artificial intelligence, RAI should be prioritized not just as a technical issue but as an organizational mandate. Companies that can connect RAI to their mission to be a responsible corporate citizen will have the best results.

Read the report
