Two weeks sick will do something to your reading habits. You lose the bandwidth for short takes and scattered tabs and end up going deep on one thing instead.

Empire of AI by Karen Hao was that thing.

What actually goes on inside these companies is something I'd wondered about for a while. Mostly out of curiosity, but also because I'm using these tools daily. So it was genuinely interesting to read a thorough account of what's happening inside the labs.

It did complicate things. I'm still working through it.

What It Is

Hao spent years covering AI for MIT Technology Review before writing this. Empire of AI follows OpenAI from its founding as a nonprofit safety research lab through its transformation into one of the most powerful companies in the world. Sam Altman is the central figure. The November 2023 board drama (the firing, the reinstatement) takes up a good portion of the book, as does the Microsoft entanglement and the internal debate over whether speed or safety should win when the two pull in different directions.

The reporting feels thorough. It reads like someone who did the work and wasn't trying to settle a score.

The Org Dynamics Part

I kept reading it from an operations lens, probably because that's just how I process things.

At some level, the story Hao is telling is a familiar organizational one. A group starts with a clear founding mission. Capital and pressure and scale come in. Somewhere along the way, the mission stops being what drives decisions, even though the language stays exactly the same. Safety. Alignment. For the benefit of humanity.

I've seen smaller versions of this. Not remotely at this scale. But the shape is recognizable. The way the words stay consistent while the actual priorities quietly shift. The way people inside notice the gap but aren't sure how to name it.

The researchers who left over safety concerns don't come across as difficult or unreasonable. They seem like people who took the founding mission seriously and found themselves in an organization that had moved on without announcing it.

That part was uncomfortable to sit with.

Reading It as a Practitioner

This is probably the part I'm still most uncertain about.

I use these tools to build things that go into real classrooms, with real students. And reading about the pressure to ship faster, the shortcuts in safety evaluation, the race logic that makes slowing down feel impossible, it doesn't make me want to stop. The tools are genuinely useful and they're already here.

But I'd been operating on a background assumption I hadn't examined. Something like: the people building this are trying to get it right. Empire of AI makes that assumption harder to hold without evidence.

I'm still using the tools, just a little more critically than I was two weeks ago. I'm not sure what to do with that yet.

What the Book Does Well

Altman comes across as complicated. Someone navigating enormous pressure, making calls, not all of them defensible. That feels more honest than either the hagiography or the takedown you usually get.

The board drama chapters are the most readable, probably because they have the clearest narrative shape. The bigger structural arguments about compute concentration, about who actually gets to make decisions at this scale, are harder to follow but probably more important in the long run.

Where I Landed

I finished it with a question I can't quite answer.

If the people closest to these systems, who built them, who study them full time, who have access to what's actually happening inside, can't agree on whether this is going to go well, what's the right posture for someone like me? Someone who uses them, builds on them, teaches with them?

I don't think the answer is to disengage. The tools exist and they're not going anywhere. It's probably something more like staying honest about what you're assuming, and staying willing to revise those assumptions.

Empire of AI didn't give me a clean answer to that. I don't think it was trying to. It just made the question harder to ignore.

Rating: 4/5. Dense in places. Worth it if you're actively building with these tools and haven't examined your assumptions lately.


References

  • Karen Hao, Empire of AI (2025)