Making Sense // The Trouble with AI - ft. Gary Marcus

Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI).

Key Points

  • The value of expertise is fragile, and maintaining it requires a personal ethic of not pretending to know things one doesn't know.
  • Asking unqualified individuals for their opinion on important topics undermines the necessary work of making institutions trustworthy.
  • Gary Marcus is concerned about the lack of control and regulation surrounding AI, even at the approximate level of intelligence current systems exhibit.
  • Stuart Russell gives an example of how the limitations of current AI systems foreshadow the potential risks and negative consequences of achieving AGI.
  • Data-driven AI systems struggle to generalize beyond their training data and to recognize complex concepts such as groups in Go, which leaves them open to adversarial attacks and prone to failures on other tasks like arithmetic and chess.
  • The speakers advocate a more traditional engineering approach to building AI systems, one that involves understanding the underlying components and how they work together, in order to ensure alignment and control over the system's behavior.
  • Narrow AI is developed for one specific task; artificial general intelligence (AGI) could quickly learn to be competent at virtually any task; and artificial superintelligence (ASI) would mean systems far superior to humans in all of these respects.
  • Although narrow AI seems safe because it is developed for one specific task, deploying it still poses risks: its failure modes cannot be known in advance, since it lacks common sense and a more general view of the problem it is solving. Moreover, the dominant deep-learning paradigm comes with unpredictable gaps that make it hard for humans to reason about what a system will do, which has safety consequences.
  • The conversation covers two main issues: failures of narrow AI, where it either fails to do what it is supposed to do or is applied in harmful ways; and the potential dangers of ChatGPT and similar technologies, which can easily produce misinformation.
  • The speakers discuss their concerns about the current reliance on deep learning techniques, as these systems often lack deep, interpretable, and validated representations of the world and may not lead to the promised land of AGI.
  • The conversation notes a paradox in building more reliable AI, for instance through probabilistic programming: a more reliable system would also be better at brainwashing people, and so may pose greater risks.
  • There is a debate around whether AGI can be controlled if it becomes much more intelligent and powerful than humans, and whether we can agree on a set of consensus values to program into such a system.
  • There is concern over narrow AI becoming more powerful and the negative effects it can have on society, particularly with regard to the information space and social media.
  • The solution requires institutional and regulatory changes to combat disinformation and deep fakes, as well as a shift away from the current business model of gaming people's attention with misinformation.
  • The metaverse could supply fake friends that are more effective and insidious than traditional advertising; the European Union's AI Act, however, strictly bans impersonating human beings, which will be important for human freedom in the coming decades.
  • The concern about AGI alignment and the control problem is not about machines spontaneously becoming evil, but rather a matter of what mismatches in competence and power can produce in the absence of perfect alignment between human goals and AI goals. Brilliant AI researchers who downplay the concern may be exhibiting motivated cognition and a self-defense mechanism that makes the problem even more worrying.

Episode Description

Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of deep learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the metaverse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics.
