Happy Friday! 👋
I’ve been sitting with two things for the past few days.
One is Satya Nadella’s year-end letter. The other is a New York Times piece about why so many Americans feel uneasy — even hostile — toward AI.
Read separately, they’re interesting. Read together, they explain each other.
One is written from the top of the system. The other is written from inside it. Together, they capture the moment we’re in.
Let’s get into it.

Driving the news: In a year-end blog post titled "Looking Ahead to 2026," Satya Nadella, CEO of Microsoft, takes a notably restrained approach to artificial intelligence. There are no sweeping promises or timelines, and very little focus on competition. Instead, the letter lingers on what happens after the technology works — how AI is deployed, how it fits into existing institutions, and how easily confidence in these systems can erode once they become part of everyday life.
The subtext is hard to miss. AI capability has moved ahead of society’s ability to absorb it. The harder problems now sit outside the models themselves, in trust, governance, and the practical realities of using these tools in environments that weren’t designed for them. Nadella isn’t arguing that progress should stop, but that progress without integration creates its own kind of risk.
The stakes: What’s at risk isn’t whether AI improves. It’s whether people continue to accept it. That concern shows up clearly in recent New York Times reporting on public discomfort with AI. Many people describe not fear so much as fatigue — a sense that AI has arrived everywhere at once, often without explanation or consent. It shows up at work, in school, and in the services people rely on, even as accountability remains fuzzy.
Nadella’s letter reads like a response to that mood. If trust erodes early, it doesn’t come back easily. Systems that feel imposed rather than helpful invite resistance, not adoption. Once people disengage, progress stops compounding and starts fragmenting.
The friction: There’s an unavoidable timing problem at the center of this moment. AI advanced quickly. Institutions didn’t.
Companies feel pressure to deploy, governments feel pressure to react, and individuals are left adjusting in real time. In that environment, restraint can look like hesitation, even when it’s the more durable choice. Nadella’s emphasis on responsibility and system design pushes against an ecosystem that still rewards speed and spectacle.
The tension is that slowing down now may be the only way to avoid a harder stop later.
What this reveals: Taken together, Nadella’s letter and the public reaction captured by the Times point to the same reality: the limiting factor for AI is no longer technical progress. It’s social readiness. Systems can improve faster than people are willing to accept them, and once that gap opens, it shapes everything that follows.
The question now isn’t how quickly AI can advance, but how carefully it’s introduced — and whether the people affected feel they have any agency in the process.
The bigger picture: Nadella isn’t offering a grand vision of the future. He’s describing a narrowing window in the present.
AI doesn’t fail because it stops working. It fails when people decide it doesn’t have their interests in mind. The public sentiment reflected in the Times reporting suggests that gap is already forming. Nadella’s letter is a signal that some of the people closest to the system see it too.
For everything else, see below 👇:
AI
AI’s Most Important Benchmark in 2026? Trust
Why confidence, not raw capability, may decide which AI companies win next. — (Mark Sullivan for Fast Company) — Link
Parents Named Their Baby ChatGPT
A viral story highlights how AI culture keeps bleeding into real life. — (Victor Tangermann for Futurism) — Link
People Are Starting to Treat Chatbots Like Romantic Partners
AI companionship raises new questions about intimacy, dependency, and emotional boundaries. — (The Atlantic staff for The Atlantic) — Link
Schools Are Turning to AI Tools Like ChatGPT in Estonia and Iceland
Small countries are testing what large-scale AI adoption in classrooms can look like. — (Natasha Singer for The New York Times) — Link
Business
In 2026, Corporate Purpose Faces a Fork in the Road
Companies will have to decide whether purpose is operational—or just branding. — (Carol Cone for Fast Company) — Link
Entertainment
Hollywood Ended 2025 With Higher Ticket Sales Than Expected
A late-year slate helped stabilize box office results despite long-term headwinds. — (Brooks Barnes for The New York Times) — Link
The ‘Stranger Things’ Finale Is Already Acting Like a Box Office Event
Netflix’s biggest show is blurring the line between streaming and theatrical releases. — (Bethy Squires for Vulture) — Link
Media
TV Advertising Is Booming Again—Just Not Where Hollywood Expected
Sports, live events, and reality programming are driving growth. — (Alex Weprin for The Hollywood Reporter) — Link
Reed Hastings on His Utah Art Park and Ski Resort
The Netflix co-founder talks about wealth, legacy, and building cultural monuments. — (Peter Hamby for Puck) — Link
Thanks for reading! Enjoyed this edition? Share it with a friend or colleague!
Was this forwarded to you? Sign up here to receive future editions directly in your inbox.
Support the Newsletter: If you’d like to support my work, consider contributing via Buy Me a Coffee.
Work with Me: Interested in partnering with me on sponsored content, consulting/advising, or speaking and workshops? Get in touch here.


