A look at how A.I.-related risks are escalating, why this puts more pressure on corporate boards, and what steps directors can take right now to grow more comfortable in their A.I. oversight roles.
Corporations have discovered the power of artificial intelligence (A.I.) to transform what’s possible in their operations, through algorithms that learn from their own use and constantly improve.
But with great promise comes great responsibility—and a growing imperative for monitoring and governance.
“As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention,” writes Washington, D.C.–based law firm Debevoise & Plimpton.
Many boards have hesitated to take on a defined role in A.I. oversight, given the highly complex nature of A.I. technology and the specialized expertise involved. But now is the time to overcome that hesitation. A.I. solutions are moving rapidly from niche to norm, and their growing adoption has heightened scrutiny from regulators, shareholders, customers, employees, and the general public.
Increasing A.I. adoption escalates business risks
Developments last year at an online real estate company dramatically illustrated the bottom-line impact when A.I. solutions go awry. At the beginning of 2021, the company launched an A.I.-powered offering intended to streamline the property valuation process. After its algorithms priced home purchases higher than the homes could later be sold for, the company reported huge losses.
Across industries, A.I. also poses challenges to environmental, social, and governance (ESG) performance, a rising board priority. Despite A.I.’s ability to automate and accelerate data collection, reporting, and analysis, the technology can harm the environment. For example, training a single image-recognition algorithm to recognize just one type of image can require processing millions of images, and all of that processing depends on energy-intensive data centers.
“It’s a use of energy that we don’t really think about,” Professor Virginia Dignum of Sweden’s Umeå University told the European Commission’s Horizons magazine. “We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.”
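To put that energy use in perspective, here is a rough back-of-the-envelope calculation. Every figure in it (the number of accelerators, their power draw, the training time, and the data-center overhead factor) is an illustrative assumption, not a measurement of any particular system.

```python
# Back-of-the-envelope estimate of the energy used to train one large
# image-recognition model. All figures are illustrative assumptions,
# not measurements of any specific system.

gpus = 256                  # assumed number of accelerators
power_per_gpu_kw = 0.3      # assumed average draw per GPU (300 W)
training_hours = 24 * 14    # assumed two weeks of continuous training
pue = 1.5                   # assumed Power Usage Effectiveness (cooling, etc.)

it_energy_kwh = gpus * power_per_gpu_kw * training_hours
facility_energy_kwh = it_energy_kwh * pue  # include facility overhead

print(f"IT load:       {it_energy_kwh:,.0f} kWh")
print(f"Facility load: {facility_energy_kwh:,.0f} kWh")
# Roughly 25,800 kWh of IT load and 38,700 kWh overall, comparable to
# the annual electricity use of several households, for a single run.
```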
A.I. can also have a negative impact on the “S” in ESG, with examples from the retail world demonstrating A.I.’s potential for undermining equity efforts, perpetuating bias, and causing companies to overstep on customer privacy.
At one e-commerce giant, an A.I.-powered HR recruitment tool demonstrated a preference for male candidates. The model, trained on a decade’s worth of submitted resumes that came mostly from men, downgraded candidates who had graduated from women’s colleges.
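The mechanism is worth making concrete. The minimal sketch below trains a simple classifier on synthetic data in which past hiring decisions were skewed against a proxy feature; the data, features, and model are all invented for illustration and bear no relation to the actual tool described above.

```python
# Minimal illustration of how historical bias in training data becomes
# model bias. The data and features are synthetic; this is not the
# recruiting system described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Proxy feature: 1 if the resume mentions a women's college, else 0.
womens_college = rng.binomial(1, 0.15, n)
# A genuinely job-relevant feature, independent of gender.
skill = rng.normal(0.0, 1.0, n)

# Historical labels: past hiring favored men, so resumes signaling
# "women's college" were hired less often at any given skill level.
p_hire = 1.0 / (1.0 + np.exp(-(skill - 1.5 * womens_college)))
hired = rng.binomial(1, p_hire)

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

print("skill coefficient:           %+.2f" % model.coef_[0][0])
print("women's-college coefficient: %+.2f" % model.coef_[0][1])
# The learned women's-college coefficient comes out strongly negative:
# the model faithfully reproduces the historical penalty and downgrades
# those candidates even when skill is identical.
```

Nothing in this pipeline is malicious; the model simply learns the pattern it is shown, which is why the training data itself deserves board-level scrutiny.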
And in the area of data privacy, algorithms and data analytics have been found to collect and reveal sensitive information. This was the case when one retailer’s “pregnancy prediction score,” which evaluated the shopping habits of expectant customers, inadvertently revealed a teenage girl’s pregnancy to her family.
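A toy example shows how that kind of aggregation works. The item categories, weights, and threshold below are all hypothetical; the retailer’s actual model has never been published.

```python
# Toy illustration of how innocuous purchases can combine into a
# sensitive inference. Item categories, weights, and the threshold are
# all hypothetical; the retailer's actual model is not public.
PREGNANCY_SIGNALS = {
    "unscented lotion": 2.0,
    "prenatal vitamins": 5.0,
    "cotton balls": 1.0,
    "oversized tote bag": 1.5,
}

def pregnancy_score(basket):
    """Sum the weights of any signal items found in a shopping basket."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "prenatal vitamins", "cotton balls"]
score = pregnancy_score(basket)
print(f"score = {score}")          # 8.0
if score > 6.0:                    # hypothetical alert threshold
    print("flag customer for pregnancy-related marketing")
# No single purchase here is sensitive on its own; the privacy harm
# comes from the aggregate inference and the action taken on it.
```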
Regulators expand their scrutiny to corporate boards
Regulators worldwide have been watching A.I.’s unintended consequences and responding. In April 2021, the European Commission published draft legislation governing the use of A.I.; if passed, it would place requirements on companies that scale with an A.I. system’s potential risk to users, and systems deemed an “unacceptable risk” could be banned outright. A New York City law taking effect in 2023 regulates the use of “automated employment decision tools” to screen candidates, a practice that could introduce A.I.-generated bias based on race, ethnicity, or sex.
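One concrete check that rules like these tend to require is a comparison of selection rates across demographic groups, in the spirit of the EEOC’s long-standing “four-fifths rule.” The sketch below runs that comparison on invented numbers; it illustrates the type of audit involved, not the specific methodology any regulator mandates.

```python
# Sketch of a disparate-impact check in the spirit of the EEOC's
# "four-fifths rule": compare each group's selection rate with the
# highest group's rate. All counts below are invented for illustration.
from collections import Counter

# (group, selected?) outcomes from a hypothetical screening tool
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, was_selected in outcomes if was_selected)

rates = {group: selected[group] / applied[group] for group in applied}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / highest
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"group {group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
# Group B's impact ratio is 0.25 / 0.40 = 0.625, below the 0.8 mark
# that conventionally triggers closer review.
```

An actual audit involves more than a single ratio, but this comparison of selection rates is the kind of arithmetic at its core.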
More and more, A.I. problems are becoming board problems, and A.I. oversight a board responsibility. As a result, regulatory authorities around the world are codifying board cyber and privacy liability risks into law.
Steps your board can take to get A.I. savvy
How can corporate boards stay on top of A.I. developments, and ahead of A.I.-related risk? The guidance below draws on Debevoise & Plimpton’s analysis and Diligent’s own insights.
While overseeing A.I.-related risks can seem intimidating and complex, boards can simplify the process and gain peace of mind through education, awareness, and a plan for structured oversight.
The right technology can help your board move more quickly and confidently into an A.I. oversight role. Schedule a meeting with Diligent today to find out more.
Note: This article was created by Diligent and originally appeared on the Diligent website.