5 Things Board Members Should Consider When It Comes to AI
AI is on everyone’s mind: trying it, overseeing it, contemplating it on a philosophical level. Best practices are still evolving, yet there’s no shortage of opinions about what AI means for work, society, humanity…and on and on. Individuals are starting to understand how to use AI tools to enhance productivity in their work lives. But for a board, trying out AI tools is one thing; governing AI’s use throughout an entire organization is a whole other challenge. Where do you start?
We were curious what advice AI would offer about governance, so we asked ChatGPT, “What are the five things board members should consider when it comes to AI?” The results confirm what we’ve seen and heard about the current state of generative AI: while ChatGPT offered a decent summary of the issues, it fell short in many respects. As with so many critical functions, a ChatGPT query gives us a start, but pales when it comes to human understanding and expertise, much less judgment and nuance.
Below, we share ChatGPT’s verbatim recommendations for establishing AI governance (in italics), followed by Boardspan’s informed commentary and clarifications.
1. Understand the Capabilities and Limitations of AI
ChatGPT: As a board member, it is crucial to have a solid understanding of the capabilities and limitations of AI technology. Educate yourself about different AI applications, such as machine learning, natural language processing, and computer vision, and how they can benefit your organization. Recognize that AI is not a magic solution but a tool that requires careful consideration and human oversight.
Boardspan: Our first reaction upon seeing this answer was, “Well, that’s obvious.” But maybe it’s not. You’ve probably read enough to understand what AI is (and isn’t). However, given the evolution in AI tools and its rapid proliferation over the past year, now is the time to ensure your knowledge is current.
As a board member, you don’t need to be training AI models or becoming an expert on all the details. But you should know what AI is capable of and used for today, its larger implications, and its potential future impact. Here’s where to start:
- The Basics: deeplearning.ai (founded by noted AI expert Andrew Ng) offers a wealth of information, including a course called AI for Everyone that provides a robust explanation of the various types of AI and their uses
- Impact on Society: The Economist has published a number of thought-provoking pieces on the impact of AI on the economy and society, like this one
- Business Considerations: McKinsey recently released the report, What Every CEO Needs to Know About Generative AI, which gives solid guidance for understanding business impact and AI governance. Forbes also recently published a piece about the top questions CEOs are asking McKinsey about AI
- Ethical Implications: UNESCO has published the first global standard concerning AI ethics
These resources and others can help you deepen your understanding of the current state of AI before you help define policies for the use of AI within your organization. Of course, any organizational policies will involve management’s perspective, so the board will want to ensure that executive leadership is also doing its homework to understand AI and its specific uses and potential across the organization.
2. Foster a Culture of Ethical AI Use
ChatGPT: Ensure that ethical considerations are at the forefront of AI implementation within your organization. Establish guidelines and policies that address issues such as data privacy, bias mitigation, transparency, and accountability. Encourage the development of AI systems that align with your organization's values and promote responsible and fair use of AI technologies.
Boardspan: With all respect to ChatGPT, ethics are a fundamentally human consideration that goes much deeper than what’s mentioned above; there is no AI shortcut for thinking through complex ethical questions. We know from our conversations with boards, and from Boardspan’s Board Performance Assessment Benchmark scores, that ethics and fiduciary responsibilities are topics that boards take very seriously.
Before any governance decisions get made, boards and executive leadership need to be in absolute alignment on ethical use of AI. That means not only understanding and aligning around the clear ethical boundaries that apply on a universal level (as outlined in standards like the UNESCO recommendations), but also aligning on how to define ethics as they relate to your organization’s specific use cases, culture, and values.
If there was ever an issue on which stakeholder opinions might run high, incorporating AI into any aspect of your products, services, or workflow is it. It will be critical to consider multiple perspectives as you establish and communicate your organization’s ethical boundaries. In addition to the issues surfaced by ChatGPT (data privacy, bias mitigation, transparency, and accountability), consider other critical questions:
- How do you address flaws in AI’s current capabilities, such as inaccurate output?
- How much AI is too much in various scenarios?
- What are the consequences of breaching ethical lines in the sand?
Once agreed upon, the ethical parameters of AI usage should be clearly articulated and disseminated to all relevant stakeholders.
3. Evaluate Risks and Implement Robust Governance
ChatGPT: AI can introduce new risks and challenges, including potential biases, security vulnerabilities, and legal and regulatory compliance issues. Work closely with management to assess these risks and implement robust governance frameworks. This includes establishing clear lines of responsibility, ensuring data quality and integrity, and regularly monitoring and auditing AI systems to mitigate risks effectively.
Boardspan: On this point, ChatGPT gets it. Any organization embracing AI in a substantive way will need to design frameworks for assessing the risks posed and the legal, regulatory, and compliance issues raised. For example:
- If your organization already has fundamental data management and technology oversight in place, what additional oversight may be required for AI?
- Which Committee(s) of the board will be responsible for AI oversight?
- Does the board have the appropriate talent and expertise to adequately oversee AI?
When it comes to AI oversight, the Risk Committee is a natural starting place. And just as we have previously seen some organizations form a Cybersecurity Committee or an ESG Committee to oversee new and significant governance initiatives, we wouldn’t be surprised to see some boards form an AI Committee to establish the appropriate governance frameworks and oversight mechanisms. Some might form an ad hoc AI Committee to fully consider the oversight issues, then hand off ongoing oversight to a standing committee such as Risk. Whatever approach a board uses to get its arms around AI governance, oversight will ultimately involve the full board, as organizations contemplate the risks, the opportunities, and where AI fits into the overall strategy.
We all recognize that technology governance is an ever-evolving and -expanding board responsibility. Our advice here is to ensure your board has a robust process for evaluating its own performance and ensuring it has the talent needed to oversee engagement with AI, including a conscientious approach to board succession planning.
4. Invest in Data Quality and Infrastructure
ChatGPT: AI relies heavily on quality data to provide accurate and reliable insights and predictions. Board members should advocate for investments in data management infrastructure, data governance practices, and data quality assurance processes. Collaborate with management to ensure that proper data collection, labeling, and storage procedures are in place to support AI initiatives effectively.
Boardspan: In a broad sense, “Garbage In, Garbage Out” has never been truer than with AI. But ChatGPT is jumping the gun by encouraging boards to push for increased technology investment. When considering AI’s impact on an organization’s data technology and operations, you shouldn’t automatically assume that the organization will need to make infrastructure changes. We recommend that you spend time with technology owners and stakeholders to understand an organization’s existing data technology infrastructure and what is required to use AI effectively and productively before advocating for investment.
There’s an equally important aspect that ChatGPT’s response doesn’t mention. Most likely, an employee will interface with AI using a third-party tool, one whose inner workings are largely out of their control…like ChatGPT. No offense, but for the results it generates, ChatGPT draws on a vast buffet of content, the provenance and quality of which can be unacceptably vague. In the case of third-party tools, there must be policy governing their use that addresses:
- Appropriate use cases
- Quality and accuracy standards
- Compliance and confidentiality requirements
5. Promote Collaboration between Humans and AI
ChatGPT: Recognize that AI technologies work best when they augment human capabilities rather than replace them. Encourage a culture of collaboration between humans and AI systems, emphasizing the value of human judgment, creativity, and ethical decision-making. Foster an environment where employees feel empowered to provide feedback.
Boardspan: On an intellectual level and a technological one, generative AI in its current state works best when used to enhance human productivity, not replace it. On an emotional level, many professionals–including software developers, marketers, lawyers, and knowledge workers of all kinds–may perceive an existential threat to their professional value and livelihoods. A media frenzy around AI and a wealth of uninformed opinions on social media certainly don’t disabuse us of such notions. Sorting out how AI can and should be used within the organization, and where it intersects with team members, and then communicating all this as transparently as possible is a worthy goal.
Nevertheless, we don’t agree with the blanket statement that the board should “encourage a culture of collaboration between humans and AI systems.” Instead, we recommend every organization:
- Carefully consider the ways team members might use AI
- Establish guidelines that promote appropriate usage
- Set clear boundaries to discourage inappropriate uses of AI
Every organization’s AI framework will be different. Leadership should drive the creation of policy, but it’s the board’s job to ensure that all these points (and more) have been carefully considered and the balance of human contribution and AI aligns with ethics, values, culture, and overall benefit to the organization.
Where we wholeheartedly agree with ChatGPT is that, as enticing as AI is from a productivity and cost-cutting perspective, human creativity, experience, empathy, and judgment are currently irreplaceable. As a board, one of your most important contributions will be to set a tone at the top that reinforces that view within the organization.
[Photo by Mariia Shalabaieva on Unsplash]