3 Comments
John Oakley:

I agree with Lord Hunt that we must have a moratorium on the development of all future AI features. Otherwise we will find ourselves in today's chaos writ large: a world that doesn't even understand the technology it already uses. How can we propagate systems like those behind the Horizon, Co-op, and Jaguar Land Rover failures across infrastructure whose workings we ought to understand, without first developing guardrails against catastrophic runaway?

And we should have the tools to build those guardrails, as well as practices and procedures to examine and test them.

I've been working in AI since its first commercial iteration (which we called Expert Systems) in the '70s, and we in the AI world have been warning that this would happen.

David C:

Even if desirable, it's increasingly not feasible. The capital already accrued, and now pouring in to deliver compute and power, is baking in a next-generation AI capability aimed at leaping from here to AGI and on to ASI. Halting that progress would render the capital stack moribund and blow an enormous financial hole in the side of the hyperscalers, labs, and credit providers. I think this horse has bolted.

Nathan Metzger:

A global moratorium is the only way we'll make it through this. We need scientific consensus on whether powerful AI systems can be created safely, and right now we do not have it. Why would we build something that could undo us?