Overseeing artificial intelligence: Moving your board from reticence to confidence

A look at how A.I.-related risks are escalating, why this puts more pressure on corporate boards, and what steps directors can take right now to grow more comfortable in their A.I. oversight roles.

Corporations have discovered the power of artificial intelligence (A.I.) to transform what’s possible in their operations. Through algorithms that learn from their own use and constantly improve, A.I. enables companies to: 

  • Bring greater speed and accuracy to time-consuming and error-prone tasks such as entity management
  • Process large amounts of data quickly in mission-critical operations like cybersecurity
  • Increase visibility and enhance decision-making in areas from ESG to risk management and beyond

But with great promise comes great responsibility—and a growing imperative for monitoring and governance.

“As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention,” writes Washington, D.C.–based law firm Debevoise & Plimpton.

Many boards have hesitated to take on a defined role in A.I. oversight, given the highly complex nature of A.I. technology and specialized expertise involved. But now is the time to overcome such hesitation. A.I. solutions are moving rapidly from niche to norm, and their increased adoption has heightened scrutiny by regulators, shareholders, customers, employees, and the general public.

Increasing A.I. adoption escalates business risks

Developments in 2021 at an online real estate company dramatically illustrated the bottom-line impact when A.I. solutions go awry. At the beginning of that year, the company launched an A.I.-powered offering designed to streamline the property valuation process. After its algorithms set purchase prices higher than the homes could later be sold for, the company reported huge losses.

Across industries, A.I. also poses challenges to environmental, social, and governance (ESG) performance, a rising board priority. Despite A.I.’s ability to automate and accelerate data collection, reporting, and analysis, the technology can harm the environment. For example, training a single image-recognition algorithm to recognize just one type of image can require processing millions of images, and all of that processing depends on energy-intensive data centers.

“It’s a use of energy that we don’t really think about,” Professor Virginia Dignum of Sweden’s Umeå University told the European Commission’s Horizons magazine. “We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.”

A.I. can also have a negative impact on the “S” in ESG, with examples from the retail world demonstrating A.I.’s potential for undermining equity efforts, perpetuating bias, and intruding on customer privacy.

At one e-commerce giant, an A.I.-powered HR recruitment tool demonstrated a preference for male candidates. The algorithmic model, trained on a decade’s worth of resumes submitted mostly by men, downgraded candidates who had graduated from women’s colleges.

And in the area of data privacy, algorithms and data analytics have been found to collect and reveal sensitive information. This was the case when one retailer’s “pregnancy prediction score,” which was used to evaluate the shopping habits of expecting customers, inadvertently revealed a teenage girl’s pregnancy to her family.

Regulators expand their scrutiny to corporate boards

Regulators worldwide have been watching A.I.’s unintended consequences and responding. In April 2021, the European Commission published draft legislation governing the use of A.I.; if passed, it would impose requirements on companies’ A.I. systems in proportion to each system’s potential risk to users, and systems deemed an “unacceptable risk” could be banned outright. A New York City law that takes effect in 2023 regulates the use of “automated employment decision tools” to screen candidates, since such tools can produce A.I.-driven bias based on race, ethnicity, or sex.

More and more, A.I. problems are becoming board problems, and A.I. oversight a board responsibility. As a result, regulatory authorities around the world are codifying board liability for A.I. risks into law, much as they have for cyber and privacy risks. Examples include:

  • Principles by the Hong Kong Monetary Authority holding the board and senior management accountable for A.I.-driven decisions, with leadership charged to ensure appropriate A.I. governance, oversight, and accountability frameworks, as well as risk mitigation controls
  • A suggestion by the Monetary Authority of Singapore that firms set approvals for highly material A.I. decisions at the board/CEO level, with the board maintaining a central view of these decisions and receiving periodic updates on company A.I. use
  • Emphasis by the UK Financial Conduct Authority and the Bank for International Settlements that boards and senior management start tackling A.I.’s major issues “because that is where ultimate responsibility for AI risk will reside,” in the words of Debevoise & Plimpton

Steps your board can take to get A.I. savvy

How can corporate boards stay on top of A.I. developments, and ahead of A.I.-related risk? Guidance follows, drawing from Debevoise & Plimpton and Diligent’s own insights.

  • Strengthen expertise: Evaluate your current A.I. knowledge base and comfort level. If you’re concerned the necessary expertise isn’t yet there, consider A.I. training—or adding another director. Also consider getting at least a few directors up to speed on your organization’s key A.I. systems: what they do, how they use data, and associated operational, regulatory, and reputational risks.
  • Designate ownership: Integrate the topic of A.I. and related risk management into the board’s agenda, and clearly designate roles and responsibilities. While ownership might initially reside with the full board, it could be wise to delegate that responsibility to a specific committee, whether an existing cybersecurity committee or a new committee dedicated solely to A.I. oversight.
  • Formalize policies and procedures: Establish reporting requirements and an overall compliance structure. Effective requirements might include regular risk assessments, continuous monitoring of certain systems, and specific policies and procedures governing how management should respond immediately to an adverse event.
  • Prioritize A.I. awareness and transparency: Internally, use briefings to remain up to speed on all A.I.-related incidents and material investigations. Externally, cultivate transparency by including detailed accounts of oversight and compliance activities in board minutes, and share activities and risks as appropriate in materials regularly made available to shareholders.

While overseeing A.I.-related risks can seem intimidating and complex, boards can simplify the process and strengthen peace of mind through education, awareness, and a plan for structured oversight.

The right technology can help your board move more quickly and confidently into an A.I. oversight role. Schedule a meeting with Diligent today to find out more.

Note: This article was created by Diligent and originally appeared on the Diligent website.