Happy New Year’s Eve!
This is the final issue of 2025, and before we wrap the year, thank you for reading and spending some of your time here. I really appreciate it.
As the year closes, I’ve been thinking less about predictions and more about which assumptions quietly stopped holding up. One of those assumptions sits underneath how we scroll, what we trust, and how we decide what’s real.
Today’s lead is about that break: why it matters, and why what was said marks a real shift rather than a repeat of what we already knew.
Let’s get into it 👇

Driving the news: In a recent thread, Adam Mosseri, the head of Instagram, challenged a long-standing assumption about how AI is likely to reshape media over the next few years. His point wasn’t about a specific feature or policy. It was about what breaks once AI-generated content becomes normal.
The argument is simple and uncomfortable: you won’t be able to tell what’s real just by looking anymore. As AI-generated photos and videos become more common, more believable, and harder to distinguish from captured media, appearance will stop doing the trust work it once did.
Mosseri outlines three ideas that explain why this shift matters as we look toward 2026.
First, AI-generated media will increasingly blend into everyday feeds. The problem won’t be obvious fakes or viral deepfakes. It will be ordinary-looking content that doesn’t trigger visual suspicion.
Second, that reality will make detection harder over time, not easier. As AI tools improve, Mosseri suggests platforms may find it more practical to verify what is real than to keep chasing what is fake, potentially by validating media at the moment it’s captured.
Third, labels alone won’t be enough. Even if platforms flag AI-generated content, people will still need more context about who is posting something, including account history and behavior, to decide what to trust.
None of these ideas are shocking in isolation. What’s different is the posture. For years, platforms implicitly assumed they would keep getting better at spotting fake content. Mosseri is acknowledging that the opposite is more likely. As AI improves, detection will continue to lag. That flips the model from assuming content is real and catching what isn’t, to assuming nothing and trying to prove what is.
What’s changing: For years, authenticity functioned as a natural constraint. Being “real” meant being present, capturing moments, and sharing them directly. Looking ahead, that constraint weakens. The same tools that make creation more accessible also make it easier to reproduce the signals that once distinguished captured media from generated media.
This doesn’t mean AI content will look bad or obviously artificial. Much of it already appears polished, coherent, and convincing. As those tools continue to improve, realism becomes a weaker differentiator. The result is a world with far more content, created in far more ways, where visual fidelity alone carries less information about origin.
What gets harder: As AI-generated content improves, instinct becomes a less reliable guide. For most of my life, I could assume that photos and videos represented real moments unless there was a clear reason not to. Looking ahead, that assumption becomes harder to sustain.
Platforms can label AI-generated content, and they likely will, but labels only address part of the problem. As generation tools improve, detection will remain reactive. Visual cues will continue to lose their usefulness as a way to establish trust.
What remains unresolved: Mosseri points to verifying real media at capture as one possible direction, but that approach raises open questions around standards, adoption, and enforcement. More broadly, it highlights a gap platforms haven’t solved yet.
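To make “verifying real media at capture” a little more concrete, here is a minimal sketch of the general idea behind content-provenance efforts such as C2PA: a capture device signs a hash of the media as it is recorded, and a platform later checks that signature before treating the file as captured rather than generated. This is not Instagram’s implementation or anything Mosseri described in detail; the function names and flow are illustrative assumptions, written in Python with the cryptography library.

```python
# Hypothetical sketch of capture-time signing and later verification.
# Illustrative only: sign_at_capture / verify_on_upload are made-up names,
# not any platform's actual API.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_at_capture(media_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
    """The capture device signs a hash of the media the moment it is recorded."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)


def verify_on_upload(media_bytes: bytes, signature: bytes, device_public_key) -> bool:
    """A platform re-hashes the uploaded media and checks the device's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        device_public_key.verify(signature, digest)
        return True   # bytes match what the device signed at capture
    except InvalidSignature:
        return False  # media was altered, or was never signed by this device


# An unmodified photo verifies; an edited copy of the same photo does not.
device_key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes straight off the sensor..."
sig = sign_at_capture(photo, device_key)
print(verify_on_upload(photo, sig, device_key.public_key()))              # True
print(verify_on_upload(photo + b"edited", sig, device_key.public_key()))  # False
```

Even in this toy form, the open questions above are visible: who vouches for device keys, what happens when a platform re-encodes the file and the hash no longer matches, and how any standard like this gets adopted and enforced at scale.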
People won’t just need to know how something was made. They’ll need to know who is sharing it and whether that person has earned trust over time. To me, that’s the real shift heading into 2026. In a world of content abundance, trust won’t come from how something looks. It will come from who is behind it.
For everything else, see below 👇:
Work & Life
New Year’s Resolutions For The Overcommitted
Why traditional goal-setting fails for people already stretched thin, and what to do instead. — (Utkarsh Amitabh for Fast Company) — Link
AI
OpenAI Is Paying Employees More Than Any Major Tech Startup In History
Sky-high compensation underscores how aggressively OpenAI is competing for top AI talent. — (Berber Jin, Nate Rattner, and Bradley Olson for The Wall Street Journal) — Link
Entertainment
Zootopia 2 Becomes Disney’s Highest-Grossing Animated Movie
The sequel’s box office performance cements it as Disney animation’s biggest commercial hit to date. — (Rebecca Rubin for Variety) — Link
How Boutique Firms Dominated Hollywood’s Biggest Deals In 2025
Smaller advisory firms played outsized roles in the year’s most significant entertainment transactions. — (Todd Spangler for Variety) — Link
Media Predictions For 2026: From Odyssey To Netflix’s Warner Bros. Deal
A look ahead at the bets, blockbusters, and power shifts expected to shape media next year. — (Variety Staff for Variety) — Link
Women Directed Fewer Box Office Hits In 2025, Report Finds
New data shows women helmed a smaller share of top-grossing films this year. — (Brooks Barnes for The New York Times) — Link
Americans Are Watching Fewer New TV Shows And More Free TV
Viewers are gravitating toward reruns and ad-supported options over new scripted series. — (Lucas Shaw for Bloomberg) — Link
Mubi’s Big Bet On Art-House Movies
How Mubi is blending streaming, distribution, and curation to carve out a distinct niche. — (Bilge Ebiri for Vulture) — Link
The Best And Worst Of 2025’s Movie Star Press Tours
A year-end ranking of celebrity press strategies that landed and backfired. — (Vulture Staff for Vulture) — Link
Tech
Semafor Tech’s Predictions For 2026
A forward-looking take on how AI, platforms, regulation, and power dynamics could shape the tech industry next year. — (Semafor Staff for Semafor) — Link
Thanks for reading! Enjoyed this edition? Share it with a friend or colleague!
Was this forwarded to you? Sign up here to receive future editions directly in your inbox.
Support the Newsletter: If you’d like to support my work, consider contributing via Buy Me a Coffee.
Work with Me: Interested in partnering with me on sponsored content, consulting/advising, or speaking and workshops? Get in touch here.


