With Sam Altman reinstated as CEO and a refreshed board, OpenAI looks to be getting back to business. But the conversation around the past, present, and future of AI shows no signs of slowing down. How did we get here? Where are we going? And, most important for boards, how do we govern it? This week, The New York Times takes an in-depth look at the "AI race," including how modern artificial intelligence got its start, how it changed the tech industry, and how the world is trying to define policy around it. We also see more analysis of how corporate governance will be shaped by AI.
In other news: Meta and Google both make well-timed AI moves; questions remain about OpenAI's new board; how to build the right advisory board for startup success; and what keeps the SEC's chief accountant up at night.
In the Spotlight
Ego, Fear, and Money: How the AI Fuse was Lit
How the tech industry's power players set modern artificial intelligence in motion
“The question of whether artificial intelligence will elevate the world or destroy it — or at least inflict grave damage — has framed an ongoing debate among Silicon Valley founders, chatbot users, academics, legislators and regulators about whether the technology should be controlled or set free. That debate has pitted some of the world’s richest men against one another: Mr. Musk, Mr. Page, Mark Zuckerberg of Meta, the tech investor Peter Thiel, Satya Nadella of Microsoft and Sam Altman of OpenAI. All have fought for a piece of the business — which one day could be worth trillions of dollars — and the power to shape it.” THE NEW YORK TIMES
Inside the AI Arms Race that Changed Silicon Valley Forever
ChatGPT's release triggered a desperate scramble to stay relevant
“At 1 p.m. on a Friday shortly before Christmas last year, Kent Walker, Google’s top lawyer, summoned four of his employees and ruined their weekend…For weeks they had been prepping for a meeting of powerful executives to discuss the safety of Google’s products. The deck was done. But that afternoon Mr. Walker told his team the agenda had changed, and they would have to spend the next few days preparing new slides and graphs. In fact, the entire agenda of the company had changed — all in the course of nine days. Sundar Pichai, Google’s chief executive, had decided to ready a slate of products based on artificial intelligence — immediately…It was an edict, and edicts didn’t happen very often at Google. But Google was staring at a real crisis. Its business model was potentially at risk.” THE NEW YORK TIMES
How Nations are Losing a Race to Regulate AI
Across the globe, technologies are evolving more rapidly than policies
“When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology. E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc. Then came ChatGPT.” THE NEW YORK TIMES
AI Companies Can Define Their Public Purpose Through Governance
Under PBC rules, there is no fiduciary obligation to prioritize one set of interests over another
“Artificial intelligence, particularly generative AI, is mostly developed, owned, and controlled by a small, powerful group of private companies operating under US laws and management structures. This means US corporate governance is crucial to regulating AI in a safe manner that broadly benefits humanity. To meet this challenge, lawmakers should build on existing US corporate governance infrastructure. Adding to federal regulatory guidelines, corporate governance should de-emphasize shareholder primacy and embrace a stakeholder model akin to the Delaware public benefit corporation, or PBC.” BLOOMBERG
AI Is Testing the Limits of Corporate Governance
Can AI safety research shed any light on old corporate governance problems?
“The boardroom war at OpenAI, the company behind ChatGPT, has put a spotlight on the role of corporate governance in AI safety. Few doubt that AI is going to be disruptive for society, and governments are beginning to devise regulatory strategies to control its social cost. In the meantime, however, AI is being developed by private firms, run by executives, supervised by boards of directors, and funded by investors. In other words, what is likely to prove the most important technological innovation of our lifetime is currently overseen by corporate governance — the set of rules, mostly of private creation, that allocate power and manage conflicts within a corporation.” HARVARD BUSINESS REVIEW
From Boardspan this Week:
Value-Focused Corporate Governance
There is no longer room for laissez-faire boards or board-management power struggles
"Corporate governance—the system by which a company’s board of directors and management executives align themselves with shareholders’ interests in order to make strategic decisions—can be a catalyst (or constraint) to value creation. Value creation is a product of business fundamentals (what the company actually does and how it performs) and investor perceptions (how the market prices the company’s expected future performance). Effective corporate governance enhances these two elements, primarily through greater transparency and more effective decision making, and thus generates more value for shareholders. Today, well-functioning boards of directors play an increasingly important part in shaping corporate performance and investor perception.” BOSTON CONSULTING GROUP via BOARDSPAN |
Across the Board
Google Announces AI System Gemini
The company claims new, powerful algorithms outperform GPT-4
“Google said Wednesday it would offer a range of AI programs to customers under the Gemini umbrella. It touted the software’s ability to process various media, from audio to video, an important development as users turn to chatbots for a wider range of needs. The search company will also use the algorithms to power products such as Bard, its answer to ChatGPT, and mobile-phone features that are capable of running without any network connection…The most powerful Gemini Ultra version outperformed OpenAI’s technology, GPT-4, on a range of industry benchmarks, Google said.” THE WALL STREET JOURNAL
Meta, IBM Create AI Alliance to Share Technology, Reduce Risks
Coalition of more than 40 organizations includes Oracle, AMD, and Stability AI
“The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models — rather than the proprietary systems favored by some companies — develop new hardware and team up with academic researchers…Proponents of open source AI technology, which is made public by developers for others to use, see the approach as a more efficient way to cultivate the highly complex systems. Over the past few months, Meta has been releasing open source versions of its large language models, which are the foundation of AI chatbots…The recent chaos at ChatGPT-creator OpenAI, which fired and rehired its well-known chief executive officer, has intensified a global debate about how transparent companies should be in developing powerful AI technology.” BLOOMBERG
Anthropic is Poised to Benefit From the Chaos in the A.I. World
Several OpenAI employees left in late 2020 to start Anthropic
"Life got interesting for Anthropic two weeks ago, when OpenAI nearly lit itself on fire. Anthropic had been operating comfortably in OpenAI's shadow, collecting billions in investment from Amazon, Google, and others as it developed similar technology with an increased focus on safety. Then, as the chaos rolled on, companies that built their products entirely on top of OpenAI's GPT-4 model looked for a hedge. And Anthropic was there, waiting for them. Anthropic is now in prime position to take advantage of OpenAI's misstep, in which the nonprofit board that controls the company fired CEO Sam Altman, only for him to be rehired five days later." SLATE
Should OpenAI Change its Board Structure?
Unique arrangement raises questions about the job of the board
“OpenAI has a new board, but its directors may still confront the same old problem. The artificial-intelligence startup’s unusual business structure that gave oversight of its for-profit business to a nonprofit board will be an unresolved issue for the new board to tackle. A popular suggested fix: Dissolve the nonprofit, say corporate and nonprofit directors, academics and lawyers. These people say that sorting out that potential conflict of interest will address other governance issues, including the board’s reporting hierarchy and the differing objectives between a nonprofit mission and corporate profit motive. Still others wonder if an area as impactful as artificial intelligence belongs in the hands of a small group of individuals.” THE WALL STREET JOURNAL
He Fired Sam Altman at OpenAI. Now He Has to Work With Him
Adam D’Angelo is the only original director remaining on the new board
“On the new board, D’Angelo is expected to help ensure the board exercises active oversight and has a direct line to employees beyond the management team, the people said…Several people who previously worked with D’Angelo described him as a principled person who slowly makes up his mind, often asking direct, piercing questions to help him arrive at a conclusion. He “doesn’t suffer fools,” one of the people said. D’Angelo has tended to keep a low profile and rarely appears in the press—a contrast to Altman, who has cultivated relationships with tech reporters over the years…Altman invited D’Angelo to join OpenAI’s board in 2018…Once on the board, D’Angelo began helping some of the other independent directors understand the quirks of being a tech startup, explaining that OpenAI’s rapid pace of development was typical for Silicon Valley, people familiar with the matter said…” THE WALL STREET JOURNAL
Former Wells Fargo CEO Sues Bank for $34M in Stock Options and Back Pay
Tim Sloan resigned in 2019 amid scandal over unauthorized consumer accounts
“Former Wells Fargo & Co (WFC.N) CEO Tim Sloan filed a lawsuit on Friday accusing the bank of failing to pay him more than $34 million after he resigned in 2019 amid a wide-ranging sales practices scandal. Sloan in the lawsuit filed in California state court says Wells Fargo canceled stock awards and withheld a bonus he had earned before stepping down. Wells Fargo in a statement said that ‘compensation decisions are based on performance, and we stand by our decisions in this matter.’” REUTERS
How to Build an Advisory Board That Drives Startup Success
Here's what startup founders must consider when crafting an advisory board
“As a startup founder, the composition of your advisory board can significantly impact the trajectory of your company. The recent news of Sam Altman being pushed out by his board serves as a pivotal reminder of this reality. This incident underscores the crucial need to carefully consider the structure of your advisory board from the early stages of your startup. It's a scenario that extends beyond mere corporate governance; it embodies the very essence of how a startup's future can be influenced, and potentially redirected, by its board.” ENTREPRENEUR
SEC’s Chief Accountant on What Worries Him
Paul Munter on risk assessment, the importance of cash flow, and more
“WSJ: What are the primary emerging areas of concern for you as you observe companies and auditors doing their work? Munter: One is the risk assessment process, both from an issuer and auditor perspective, to make sure that it is robust and comprehensive. You can’t think about that separate and apart from financial reporting because obviously financial reporting is trying to inform investors about your business and what the risks are and obviously what the financial consequences of the business are. And that auditors are thinking about the company’s risk assessment process, how robust, how comprehensive it is. The degree to which it is integrated with financial reporting so that auditors are making an appropriate assessment of areas of potential risk for material misstatement in the financial statements. As you’re identifying risk, that should inform your financial reporting and auditing process.” THE WALL STREET JOURNAL