Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
Sep. 16th, 2025 02:04 pm
This is the third book on AI that I have read this year. The other two are The AI Con and Unmasking AI.
Karen Hao is a journalist specializing in AI, and this book focuses on OpenAI. It starts with her experience as an AI journalist writing about OpenAI around the time the board fired Sam Altman, then goes back and explains how OpenAI came to be.
I did not know that Elon Musk was one of the people who originally funded OpenAI because he was afraid of the potential for a bad artificial general intelligence (AGI). He was convinced that one of the scientists involved with DeepMind would create an AGI that could become an existential threat. This part struck me as outlandish; my reaction was, "Surely these folks do not believe that we will reach the singularity and that AI will achieve intelligence."
Hao points out that the problem with this is that we don't have a solid definition of intelligence, nor a way to measure when an intelligence becomes self-aware. Is intelligence the ability to pass an IQ test? Well, AIs can be optimized to pass tests. Does that really make them intelligent?
Then the book gets into Sam Altman's leadership and why his board forced him out. My impression from news stories at the time was that the board consisted of AGI cultists who forced him out over ideological differences. As we get deeper into the book, we realize that Altman tells people whatever they want to hear, a style of leadership that creates conflict at the senior levels of management at his company.
The employment contracts OpenAI had with some of its early employees contained a nasty clawback clause that allowed VESTED stock to be clawed back. Altman feigned surprise when this story leaked to the news, but these clauses had already been used to pressure some former employees.
There were a bunch of little lies that did not appear to amount to much individually, but together they led to Altman concentrating more power and money for himself.
A lot of the tension inside the company was between the people who wanted to build a positive AGI as quickly as possible and the people who were concerned with safety and feared a negative AGI. In fact, the people who started Anthropic were the original safety group within OpenAI. According to The AI Con (one of the other AI books), the people who think there can be a positive AGI and the people who think there will be a negative AGI are two sides of the same coin: both camps believe that AI can become all-powerful, and there is good reason not to believe that AI will ever gain godlike powers.
The epilogue of the book offers an example of how to create an AI ethically: a model trained on the Māori language, which some people are working to preserve.
I thought this was a great book on how not to manage a company and also what to watch out for in AI startup land.