Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao

Penguin

Review by Walter Cummins

Karen Hao’s exposé of the tensions and reversals at the multi-billion-dollar company OpenAI reads like a satire of organizational follies, a topic often mocked in sitcoms. But in this book the parts are played by actual people being themselves, not actors. And these real people are some of the smartest around, genuine brainiacs. It would be easy for readers to laugh if the company’s product were something like potato chips. But these people are seeking to achieve AGI (Artificial General Intelligence), a technology that would duplicate human cognition in learning, reasoning, and problem solving. OpenAI already has millions using its ChatGPT, a seemingly important step toward actual AGI. Believers claim AGI will make the world a paradise, with cancer cured and everyone affluent; doubters fear that, once unleashed, it will bring about the downfall of human civilization.

Hao sees this as the crucial issue: “The future of AI—the shape that this technology takes—is inextricably tied to our future. The question of how to govern AI, then, is really a question about how to ensure we make our future better, not worse.”

As her book makes clear, she comes to the conclusion that OpenAI, a leader in the AGI quest, embodies the problem rather than the solution. At its creation the company proclaimed an idealistic identity as a nonprofit that would share its information. But Hao, a journalist with deep knowledge of OpenAI’s dynamics, concludes that since its origin the company has become everything it said it would not be—“competitive, secretive, and insular, even fearful of the outside world under the intoxicating power of controlling such a paramount technology.” No longer a nonprofit, it has turned to aggressive marketing to achieve a great valuation.

With so much at stake, the company’s leaders, once collaborators, became rivals, plotting behind the scenes for control of the agenda, at one point even firing the founding CEO Sam Altman, only to restore him within two days after realizing the force of the reaction within and outside the company. Board members reversed their votes, and some were ousted. Hao simply presents the shifting facts of the situation and lets the mockery speak for itself. She does conclude that the failure of governance “illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is shaping the future of AI.” The personalities and ideologies of these people are fundamental in shaping the technologies that emerge.

Before presenting her findings, Hao seeks to establish her credibility by citing the breadth of her research: 300 interviews, 150 of them with current and former OpenAI employees, along with access to what she calls a trove of documents and correspondence, as well as the work of a team of fact-checkers. She does note that Sam Altman and OpenAI did not cooperate.

Beyond the complications and confusions of this particular company and their implications, she states a serious concern about the impact of the AI industry on the people of the world and on the world’s economic structure. She calls this “an alarming direction.” In AI’s future, all control will lie with companies like OpenAI and its rivals, which together make up dominant empires: “Since ChatGPT, the six largest tech giants together have seen their market caps increase $8 trillion.”

A major ethical problem within the industry centers on the division between what were called within OpenAI the Boomers and the Doomers. The Boomers push for quicker release of new products in hopes of the resulting investments and profits. The Doomers, concerned with the potentially destructive misuses and outcomes of the products, want more time devoted to screening for dangers. It may come as no surprise that the Boomers dominate, shrinking the review period so that products can be released to market as soon as possible, ahead of rivals.

The social and environmental costs Hao investigates are not directly part of AI technology itself but are consequences of the industry for workers and for the environment. AI companies have been gathering texts and images from countless sources, often prompting copyright violation lawsuits, and much of what is swept up is harmful to users who depend on Large Language Models (LLMs), transformer-based systems that process billions of words to identify patterns and generate coherent text. The problem is that many of these word patterns carry dangerous content: child pornography and racist, sexist, and otherwise offensive statements.

To screen out such language, companies like OpenAI hire workers through subcontracting firms to delete the offending content. Many of these workers, living in poor countries like Kenya and Venezuela, are paid a pittance to keep costs to a minimum, spending hours and hours a day at the work for survival earnings. In addition, many are so disturbed by the offensiveness of what they have to read that they suffer severe psychological damage. Hao calls this exploitation an “acceleration toward [a] modern-day colonial world order.”

This she sees as a rebirth of the abuses of an earlier time of competing world empires: “During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment.”

Beyond this exploitation of people, our habitat and its resources in multiple countries have become expendable in the face of the exponential demand for power to run AI facilities, which eat up thousands of acres of land and consume massive amounts of potable water to cool processing machinery. In addition, the cost of creating these facilities runs to many billions: “By 2030, at the current pace of growth, data centers are projected to use 8 percent of the country’s power, compared with 3 percent in 2022; AI computing globally could use more energy than all of India, the world’s third-largest electricity consumer.”

It’s clear that Karen Hao is skeptical about, even hostile to, AI development, in light of the fallibilities of the highly compensated Silicon Valley leaders, the starved workers at the bottom, the despoiling of land and water, and the uncertainty that AGI will ever be achieved, or ever be worth the costs given its potentially destructive effects on human society. Her suggested alternative to the present system is certainly idealistic: a collection of small-scale democratic community projects conducted throughout the world. “Models can be small and task specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers.” But would the billionaires of OpenAI and its rivals allow that to happen?