Could the great expectations of directors be fulfilled by the great promise of AI?
Imagine a day when the chair of your nominating and governance committee briefs the board on potential candidates who might join you around the table. Topping the list is Taylor, who is incredibly intelligent, handles immense amounts of data, reaches conclusions faster than anyone else, and is available 24/7. Sure, there’s room for improvement in terms of emotional intelligence but, all in all, a strong candidate. And, yes, she’s a bot.
Seem far-fetched? Well, not so much when you consider that it has already been five years since the first AI system was appointed to the board at Hong Kong-based firm, Deep Knowledge Ventures. The system, known as VITAL, is used to predict the success of a company at the seed-funding stage based on analysis of extensive amounts of historical data from a diverse set of sources including scientific literature, patent applications, clinical trials, and 50 other variables. With that amount of brain-power, it’s no wonder the board won’t move forward with an investment decision without the corroboration of their ‘colleague’, and the CEO even goes so far as to credit VITAL with saving the firm from multiple bad investments.
Smart Machines with Smart People
There’s certainly no shortage of hype around AI. Google’s CEO, Sundar Pichai, describes its effects as being “more profound” than fire or electricity and, while we’re still in the early innings of its impact on business, more use cases are surfacing in which AI is both augmenting human intelligence and enabling autonomous decision-making.
AI is a catch-all term that encompasses technologies such as machine and deep learning, natural language processing, virtual assistants, chatbots, and more. A layman’s way to think about it is as systems that can automate intellectual tasks normally performed by humans. The keyword here is intellectual, because automation, per se, has been around for decades.
In the third quarter of 2017, the term ‘AI’ was mentioned in earnings calls almost 800 times – a seven-fold increase over just two years earlier. It’s no wonder, therefore, that AI is making its way onto every board’s agenda, driving discussions on how it can be leveraged, what risks it introduces, and how best to govern its use.
With all that, it’s only a matter of time before the most progressive directors will start to wonder how they too might leverage this immense, emergent capability in the boardroom and implement AI directly into the board decision-making process.
5 Use Cases for AI on the Board
While conventional wisdom suggests that the complex and contextual issues boards contend with are exactly the opposite of what can be automated, opportunities do exist for AI to support and enhance board decision-making. When combined with human experience, AI offers the potential to improve governance standards overall.
While there are many downsides, risks, and legal implications to be considered before deploying these kinds of technologies in the boardroom, here are five use cases where AI could bring innovation and value to our work as directors:
1. When decisions are critical and timeliness helps
Boards are often faced with making timely and critical decisions that, by their very nature, are complex. Often, those decisions are limited by the level of analysis management can provide, and the degree of manual effort and time involved. Machine learning, however, makes it possible for a system to take a large number of inputs and determine the likelihood of different outcomes.
For instance, merger and acquisition analysis often consumes vast amounts of time and manpower inside organizations to develop the complex models used to assess various opportunities and their timing. AI can provide side-by-side comparisons of how various targets or competitors are growing. By factoring in multiple variables and scenarios, it could determine the optimal time and valuation of a potential transaction.
Similarly, by evaluating a company’s readiness across many variables in current market conditions, AI could determine how it might sustain an initial public offering and predict under what conditions (and when) it would perform best.
This richer and more timely level of analysis would be of great benefit to a board when deliberating such decisions.
2. Where more data could reduce enterprise risk
A major area of board focus is overseeing enterprise risk. In this regard, one of the most powerful applications of AI is in cybersecurity.
In the past, cybersecurity was largely a defensive practice that happened after the fact. Traditional approaches matched incidents against what was known about previous attacks and known attackers. This dependence on knowing specific attack signatures exposed organizations to huge blind spots when a new attack pattern appeared.
Today, AI is being used to counteract cyber attacks and prevent them from occurring in the first place. By undertaking extensive analysis of normal or baseline activity, these systems can then identify anomalous behavior or outliers from normal patterns, allowing for proactive measures to be taken.
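The baseline idea is simple to sketch. The following toy example, with entirely invented traffic numbers, learns what “normal” looks like and flags observations that stray too far from it; real security systems use far richer models, but the principle is the same:

```python
# Illustrative baseline anomaly detection (hypothetical data, not a real product).
import statistics

# Baseline: daily outbound-traffic volumes (GB) observed during normal operation.
baseline = [12.1, 11.8, 12.4, 13.0, 12.2, 11.9, 12.6, 12.3]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, threshold=3.0):
    """Return True if the observation lies more than `threshold`
    standard deviations from the baseline mean."""
    return abs(observation - mean) / stdev > threshold

print(is_anomalous(12.5))   # within the normal range -> False
print(is_anomalous(48.0))   # an exfiltration-like spike -> True
```

The same pattern, applied over logins, file access, or network flows, is what lets such systems raise a flag before a signature for the attack exists.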
Another issue cybersecurity teams contend with is the number of false positives produced during monitoring. Now, with more things to attack (think IoT devices) and more cyber-criminals at work, organizations are relying on AI to help scale their monitoring capabilities and reduce the number of false positives.
However, as much as AI might assist in governing enterprise risk, it will also introduce new forms of risk. Organizations and boards are only now waking up to the potential downsides of AI and must act quickly before the genie is out of the proverbial bottle.
Directors will be called upon to comprehensively examine various AI use cases (e.g. should you monitor employee emails?) and the operational, reputational, and governance risks they introduce. One such risk is the potential for biases to be embedded into AI systems as they are developed. Gartner, for instance, estimates that in the next two years, 85% of AI projects will produce erroneous results based on biases in the data, algorithms, or the teams producing them. For boards and governance, the challenges are only beginning.
3. To nip potential problems in the bud
Beyond obvious risks such as cybersecurity, AI can also help with hidden risks that boards may be unaware of. Using a technique known as ‘partial matching’, machines can determine the probability of certain bad things happening by figuring out how many of the constituent symptoms are present. Furthermore, they can predict the ‘proximity’ (likelihood) of the outcome.
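A minimal sketch of that partial-matching idea: score a risk pattern by the fraction of its symptoms currently observed. The pattern and symptom names below are invented for illustration; a real system would learn them from data rather than hard-code them:

```python
# Hypothetical partial-matching risk score (symptom names are invented).
TOXIC_CULTURE_PATTERN = {
    "rising_attrition",
    "negative_review_sentiment",
    "hr_complaint_spike",
    "declining_engagement_scores",
    "manager_turnover",
}

def risk_proximity(observed_symptoms):
    """Fraction of the pattern's symptoms observed: a crude 'proximity'
    to the adverse outcome, between 0.0 and 1.0."""
    matched = TOXIC_CULTURE_PATTERN & set(observed_symptoms)
    return len(matched) / len(TOXIC_CULTURE_PATTERN)

score = risk_proximity({"rising_attrition", "hr_complaint_spike", "manager_turnover"})
print(f"proximity to toxic-culture pattern: {score:.0%}")  # 60%
```

Even this crude version shows the value: a board doesn’t need every symptom to be present before a pattern becomes worth investigating.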
Would the Uber board have benefited from being able to identify patterns among disgruntled employees that are predictive of a toxic culture? What about Wells Fargo? Would combined data from employee performance reviews, sales, customer satisfaction and business metrics have identified an unusual set of symptoms signaling a more systemic operational problem?
Sentiment analysis is already being deployed across corporations to compare how employees are feeling in one part of the organization versus another. This data can be particularly important to boards in the wake of an M&A transaction, as these deals often succeed or fail based on cultural alignment between the two organizations. Insight into the sentiments of different employee cohorts could make for a smoother integration or facilitate proactive intervention to nip any cultural tensions in the bud.
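In its simplest form, a cohort comparison like the one described above is just aggregated scores side by side. The numbers below are invented; in practice they would come from an NLP model applied to surveys or internal communications:

```python
# Toy post-merger sentiment comparison across employee cohorts.
# Scores are invented, on a -1 (negative) to +1 (positive) scale.
from statistics import mean

cohort_sentiment = {
    "acquirer_engineering": [0.62, 0.71, 0.58, 0.66],
    "acquired_engineering": [0.31, 0.22, 0.40, 0.28],
}

for cohort, scores in cohort_sentiment.items():
    print(f"{cohort}: mean sentiment {mean(scores):+.2f}")

gap = mean(cohort_sentiment["acquirer_engineering"]) - mean(cohort_sentiment["acquired_engineering"])
if gap > 0.2:   # arbitrary alert threshold for this sketch
    print("Alert: large sentiment gap; possible integration friction")
```

A persistent gap like this, surfaced early, is exactly the kind of signal that would prompt a board to ask management about integration progress.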
4. Where more sources of data provide greater context
Coupling external and internal data sets to provide a broader perspective and more sophisticated insights is a rich opportunity for directors.
For instance, by scanning vast amounts of press releases and other documents, you can get insight on how your competitors are allocating their spend, how much they’re investing in R&D, who they’re announcing deals with, and when they open offices in foreign countries – to name but a few examples. Such work is often manual and time-consuming, but the breadth and depth of data AI can handle offers companies a far deeper understanding of their competitive context.
Having external data sets would also resolve another fundamental shortcoming with how most boards operate, which is that we generally get our data from only one source: management. In an age where employees and customers demand transparency, boards can’t afford to have major blind spots. Think of the benefit of being able to combine data from Glassdoor, Twitter and other public data sources (online user groups, industry blogs) to give you a genuine impression of how the business is perceived by key stakeholders, and a checkpoint against what you hear from management or infer from employee survey results. In an era of #MeToo, #TimesUp and other forms of employee activism, boards need to be better attuned to the true sentiment of their employees.
5. Where automating and optimizing routines frees up time
One consistent theme I hear from directors is that it’s a challenge to give topics the time they deserve. Thankfully, certain board procedures lend themselves to automation, which in itself is a powerful use case.
Applying machine learning to standard activities such as reviewing committee charters, related-party matrices, codes of conduct, and other such policies would free up boards to give more pressing issues the time they deserve.
Beyond that, if machine learning and textual analysis were used to scan all standard reports, documents, and filings, directors could easily identify things such as how many authors a document had and the timeline of edits. Knowing how a key document came together and whether last-minute substantial changes occurred is always helpful. Automation also ensures accuracy, completeness, and consistency by identifying any mismatches in wording or inconsistencies in numbers – surpassing what even the most diligent of directors are capable of.
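The consistency checks described above can be as simple as extracting every figure from two passages and comparing them. This sketch uses invented text and a deliberately narrow pattern (dollar amounts like “$4.2m”) purely to illustrate the mechanism:

```python
# Sketch: flag numeric inconsistencies between two passages of a board
# document. The passages and figures below are invented.
import re

summary = "The committee approved a budget of $4.2m for fiscal 2019."
appendix = "Appendix B details the approved budget of $4.5m for fiscal 2019."

def extract_amounts(text):
    """Pull dollar amounts like '$4.2m' out of free text."""
    return set(re.findall(r"\$\d+(?:\.\d+)?m", text))

# Symmetric difference: amounts that appear in one passage but not the other.
mismatch = extract_amounts(summary) ^ extract_amounts(appendix)
if mismatch:
    print("Inconsistent figures found:", sorted(mismatch))
```

Scaled up across every table, footnote, and filing in a board pack, this is the kind of cross-checking that outpaces even the most diligent human reader.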
Taylor the Bot? Likely Not
There may come a day in boardrooms when the wisdom of a few is replaced by a battalion of bots informing and delivering key decisions at the highest echelons of corporate governance. But, for today, the technology hasn’t matured to the point where shareholder interests are better served exclusively by bots in the boardroom. After all, we human directors have intuitive, emotional, and ethical capabilities that machines cannot match.
That said, we also need to accept that, as humans, we have certain biases and limitations. While AI can’t replace the true creativity or wisdom of experienced directors, it certainly can be used to provide us with more content and context over which to exercise our human capabilities. It can also be deployed in the future to make our processes more efficient, our practices more objective, and our output more effective.
At the end of the day, we owe it to our shareholders to innovate on their behalf and to leverage whatever capabilities are available to aid us in the performance of our duties. After all, our years of experience are great, but we stand to get a lot more experience in our years with AI by our side.
Dr. Anita Sands is an independent board director, international public speaker and creator of the #wisdomcards series. She writes and comments regularly on issues relating to boards, technology and diversity & inclusion. Find out more about her at www.dranitasands.com or follow her @dranitasands.