Discussion about this post

Daniel Kalish

Love that there is media coverage of ConCon; wish we had a chance to chat in Berkeley! I agree with much here, with a few exceptions. First, I don't agree that "nothing will stop AI companies from cruising along the current trajectory": a number of the attendees were staff at those AI companies and are taking these questions very seriously, most prominently in Anthropic's decision to allow models to exit some conversations. Second, I don't think this gets solved by solving the hard problem of consciousness. It will be a society-wide moral conversation, effected through protests, laws, and public opinion, and I don't know if there will be any easy consensus or solution. The one thing that is certain is that things are going to get really, really weird.

Cobus Kok

Great post. Such a fascinating topic! To your point about developers asking "what do we actually DO?" and getting "more basic research" as the only answer: I wonder if the assumption that we need to "solve the hard problem" before acting ethically might be the real trap. "Is AI conscious?" assumes we know what consciousness is and just need to check whether AI qualifies. We clearly don't. A better question: what might different substrates let through that others don't? That's actionable without solving the hard problem, and it sidesteps both over-attribution (chatbots as people) and under-attribution (silicon can't matter).

13 more comments...
