Discussion about this post

Alexei Gannon

If you're looking for leftists who seriously consider AI, I would humbly suggest some of my own writing on the necessity of a positive socialist vision of AI

(https://open.substack.com/pub/onethousandmeans/p/the-left-must-plan-for-ai?utm_campaign=post-expanded-share&utm_medium=web)

and how "left-intellectuals" like Bender or Timnit sell left-facing pseudoscience.

(https://open.substack.com/pub/onethousandmeans/p/the-ai-denial-industry-the-lefts?utm_campaign=post-expanded-share&utm_medium=web).

R Pi

Well said. I don't think this is a left or right issue. It's simply an outgrowth of widespread ignorance among the public, not unlike ignorance about how a microwave or printer works: a lack of curiosity about the world and a willingness to accept the simplest explanation to avoid thinking about it.

As you very accurately point out, the temptation to dismiss what LLMs do by focusing only on the task (next-token prediction) while ignoring the goal (inducing internal representations that in aggregate constitute a world model) is just too strong for most people to resist. Most people don't have the knowledge to even begin questioning whether the internal representations in LLMs serve the same purpose as our own internal representations, or why theories of the brain such as predictive coding and Bayesian models place prediction at their core. Why is it that prediction used to minimize error between the brain's model and reality is never thought of dismissively, while the same concept, implemented in the digital realm, is derided when it comes to LLMs?

It's no wonder you don't see people working at AI labs defending their creations from shallow criticism; they simply don't care, and they don't care because they're building the future.
