"We [Google] Have No Moat, And Neither Does OpenAI"

"We [Google] Have No Moat, And Neither Does OpenAI"
Photo by Mitchell Luo / Unsplash

I just listened to the Latent Space podcast episode discussing the leaked Google memo about open source LLMs (via SemiAnalysis).

The memo itself is highly interesting (and well written) but so are the rebuttals:

  • Some claimed that Luke Sernau (the author of the memo) does not seem to work in AI-related parts of Google and that “software engineers don’t understand moats”.
  • Emad Mostaque, himself a perma-champion of open source and open models, has repeatedly stated that “Closed models will always outperform open models” because closed models can just wrap open ones.
  • Emad has also commented on the moats he does see: “Unique usage data, Unique content, Unique talent, Unique product, Unique business model”, most of which Google does have, and OpenAI less so (though it is winning on the talent front).
  • Sam Altman famously said that “very few to no one in Silicon Valley has a moat - not even Facebook” (implying that moats don’t actually matter, and you should spend your time thinking about more important things).

The episode with Simon Willison, who also wrote a great recap of the No Moat memo, discusses the memo at length. Enjoy!

No Moat: Closed AI gets its Open Source wakeup call — ft. Simon Willison
Listen now (44 min) | Emergency Space: Live reactions to the leaked Google Moat memo, with 2700 devs listening in, ft. Simon + Travis. Plus: the Google Brain Drain, and how Python gets 3500x faster with Mojo.
