Latent Space // No Moat: Closed AI gets its Open Source wakeup call — ft. Simon Willison

Key Points

  • The open source community is outrunning Google and OpenAI in the development of language models, which raises concerns about Google's competitiveness.
  • It is suggested that smaller models with stackable fine-tuning innovations may move faster and be more useful than giant models that need to be retrained from scratch every few months (see the sketch after this list).
  • The conversation explores the challenges of building better data for startups and the potential of open-source models like RedPajama to democratize access to data and enable fast iteration on base models.
  • Mojo, a new programming language, is introduced and discussed as a potential solution for hyper-optimized, on-device language models in industries where API calls aren't feasible, like fighter jets.
  • Mojo may be the biggest programming language advance in decades, according to Jeremy Howard.
  • Facebook may release the LLaMA weights officially, which would put them at the forefront of open source AI innovation.
  • The conversation discusses the potential harm that language models can cause in the form of automated scams, and emphasizes the importance of focusing on the real-world harm caused by these models, rather than science fiction scenarios.
  • The leaked Google memo suggests that Google should embrace open source more fully, as OpenAI's closed policy will make it difficult for OpenAI to maintain an edge in a market where models are being commoditized.
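
To make the “stackable fine-tuning” point above concrete, here is a minimal sketch of layering LoRA adapters on a frozen base model, assuming the Hugging Face transformers and peft libraries; the model and adapter names are hypothetical placeholders, not artifacts discussed in the episode.

```python
# Minimal sketch of "stackable" fine-tuning with LoRA adapters (hypothetical model IDs).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load a small base model once; its weights stay frozen.
base = AutoModelForCausalLM.from_pretrained("example-org/small-base-model")
tokenizer = AutoTokenizer.from_pretrained("example-org/small-base-model")

# Apply one community-trained LoRA adapter on top of the frozen base weights...
model = PeftModel.from_pretrained(base, "example-org/instruct-lora")

# ...then load a second adapter and switch to it, with no full retrain required.
model.load_adapter("example-org/code-lora", adapter_name="code")
model.set_adapter("code")
```

Each adapter is cheap to train and share, which is why the memo argues the open source community can iterate on a base model far faster than anyone can retrain a giant model from scratch.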

Episode Description

It’s now almost 6 months since Google declared Code Red, and the results — Jeff Dean’s recap of 2022 achievements and a mass exodus of the top research talent that contributed to it in January, Bard’s rushed launch in Feb, a slick video showing Google Workspace AI features and confusing doubly linked blogposts about PaLM API in March, and merging Google Brain and DeepMind in April — have not been inspiring.

Google’s internal panic is on full display now with the surfacing of a well-written memo by software engineer Luke Sernau from early April, revealing internal distress not seen since Steve Yegge’s infamous Google Platforms Rant. Similar to 2011, the company’s response to an external challenge has been to mobilize the entire company to go all-in on a (from the outside) vague vision.

Google’s misfortunes are well understood by now, but the last paragraph of the memo, “We have no moat, and neither does OpenAI”, was a banger of a mic drop.

Combine this with news this morning that OpenAI lost $540m last year and will need as much as $100b more funding (after the complex $10b Microsoft deal in Jan), and the memo’s assertion that both Google and OpenAI have “no moat” against the mighty open source horde has gained some credibility in the past 24 hours.
