What a wild two weeks it's been. It seemed like every day we woke up to a remarkable new development in the OpenAI story. Sam Altman is out, he’s possibly at Microsoft, he’s back in…and now the analysis begins. Throughout the unfolding events and even now, all eyes are on the board. Some are calling the episode a massive corporate governance failure. Others say the board was simply adhering to the company’s original mission. Or is the truth somewhere in between? At any rate, Sam Altman said this week that he is putting governance front and center as part of his return.
In other news, Remembering Berkshire Vice Chairman Charlie Munger; Cigna and Humana announce merger talks; Changpeng Zhao resigns as Chairman of Binance; and Elliott Management takes a $1B stake in Phillips 66, calling for a board revamp.
In the Spotlight
Back at OpenAI, Sam Altman Outlines His Priorities
Governance is a big one; Microsoft joins the board as a nonvoting member
“OpenAI said on Wednesday that it had completed the first phase of a new governance structure that added Microsoft as a nonvoting board member, as it works to end the divisions that fueled the ouster of Sam Altman as chief executive and sets itself up for a future as a bigger company. In a blog post, Mr. Altman, who was rapidly reinstated last week, also outlined his priorities for OpenAI as he retakes the reins of the high-profile artificial intelligence start-up. He said the company would resume its work building safe A.I. systems and products that benefited its customers. He added that its board would focus on improving governance and overseeing an independent review of the events that led to and followed his removal as chief executive.” THE NEW YORK TIMES
How Effective Altruism Split Silicon Valley–and Influenced OpenAI
Can OpenAI exist as both a standard-bearer and a profitable organization?
“The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas, Austin, who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said…The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.” THE WALL STREET JOURNAL
Microsoft Needs a Better Seat at OpenAI’s Table
The crisis exposed Microsoft’s vulnerability and reliance on the startup
“...the drama exposed a weak point in the AI strategy that has helped Microsoft’s stock outperform most of its major tech peers recently. The close partnership with OpenAI has given Microsoft a leg up in the race over generative artificial intelligence. Microsoft solidified that relationship with a $10 billion investment in the startup in January, two months after the high-profile launch of OpenAI’s ChatGPT chatbot. That investment gave Microsoft a 49% ownership stake in OpenAI but no board seat or any actual power because of the startup’s unusual governance structure…The fact that Microsoft was willing to lay out potentially billions more to bring top-shelf AI talent in house further drove the point home that the company needs for OpenAI to survive in some form or another.” THE WALL STREET JOURNAL
Prominent Women in Tech Don’t Want to Join OpenAI’s Board
As the board plans to add more members, many women say, “no thanks”
“The specifics of the boardroom overthrow attempt remain a mystery. Of those six, (Adam) D’Angelo is the only one left standing. In addition to (Bret) Taylor, the other new board member is former US Treasury secretary Larry Summers, a living emblem of American capitalism who notoriously said in 2005 that innate differences in the sexes may explain why fewer women succeed in STEM careers (he later apologized)...Prominent AI researcher Timnit Gebru, who was fired by Google in late 2020 over a dispute about a research paper involving critical analysis of large language models, has been floated in the media as a potential board candidate. She is, indeed, a leader in responsible AI; post-Google, she founded the Distributed AI Research Institute, which describes itself as a space where “AI is not inevitable, its harms are preventable.” If OpenAI wanted to signal that it is still committed to AI safety, Gebru would be a savvy choice. Also an impossible one: She does not want a seat on the board of directors.” WIRED
From Boardspan This Week:
Navigating Disruptive Risk
Disruptive risks can make or break the board
"Envisioning a company’s future is hard and imprecise work. But it’s increasingly clear that dedicating time to think about the future is vital to navigating the disruptive risks that are shaking up industries and upending business models…While the full board has responsibility for overseeing strategic risks—and disruptive risks are generally strategic risks—board committees have important oversight responsibilities as well. And committees can bring increased focus and attention where required. Which board committee has responsibility for overseeing each of the disruptive risks management and the board have identified as posing a threat to core strategic assumptions?” KPMG via BOARDSPAN