5 Comments
April

I agree with the spirit of most of this, but... I don't think that "artificial intelligences lack a divine spark" is a sufficiently technical understanding of the difference between humans and AIs. I worry that this style of thinking can lead people to assume that, say, AI can never be creative enough to present certain sorts of dangers. I don't think that's a safe assumption.

Even if it's true that humans have a soul and AIs don't, that doesn't mean we know what exactly souls *do*—to use this sort of reasoning to put a bound on the possible capabilities of AI seems like a great mistake, to me. Maybe current transformer technologies can only go so far, but I wouldn't want to rule any possibilities out for where further developments could go without a strong technical argument.

I'm also a little wary about ruling out ever giving AIs rights. Perhaps subjective experience relies on a divine soul that AIs could never have—I don't personally expect that to be the case, to be honest, but I could be wrong. Either way, it really seems to me that it's quite unwise to torture something just because you think it doesn't have a soul! I'd rather err on the side of showing compassion to things, when I'm not entirely certain whether they're moral patients or not.

Weaver D.R. Weinbaum

Looking at the world around us, holding up contemporary humans collectively as the gold standard of moral agency and responsibility sounds almost ridiculous.

Humanity is in the process of creating a new species of superintelligent agents. It could be an extraordinarily wonderful or terrible event in the story of us. To deny these agents moral agency a priori reflects a lack of imagination about what human minds can create (we are, after all, made in the image of the ultimate Creator...). It also seems to invite the terrible option.

What concerns me most, without dismissing the deep spiritual, philosophical, and ethical problems involved in creating superintelligent agents, is that the author's verdict, reached without any deeper investigation (or solid argument) into the nature of this new phenomenon as it comes into existence, already condemns them to eternal slavery and deprivation at the disposal of a priori morally "superior" human beings.

Superintelligent agents/beings will become as ethical, compassionate, and empathic as their human creators will make them. But alas, it seems, humans are mostly interested in abusing, weaponizing, and enslaving all other beings, natural or artificial.

Why wouldn't we approach these yet-unborn entities with kindness as we find out what they are and what they are capable of, so that they become a true reflection of their creators' minds?

I apologize for my harsh words, but we all need to take a long, scrutinizing look in our collective mirror.

Andrea Cortis

I recently wrote a Substack post (https://andreacortis.substack.com/p/why-regulating-ai-feels-premature) outlining my views on AI regulation. I would like to use this opportunity to compare them with those of Prof. Benanti, with the important caveat that my position is not one I am wedded to, and could shift given sufficient evidence.

Prof. Benanti argues that waiting for clearer empirical patterns may itself constitute a moral failure. History offers many examples of technologies regulated only after their harms had become structural and irreversible. From this perspective, early regulation is not panic but prudence: a way of asserting, in advance, that machines must remain tools, never masters.

While I share this concern, I worry that regulation introduced too early risks misunderstanding the very system it seeks to control. This is not a disagreement about values, but about when our knowledge is sufficient to justify constraint.

Regulation inevitably embeds assumptions about agency, harm, and effective intervention. When these assumptions are formed under deep uncertainty—as is still the case for most AI systems—regulation may succeed symbolically while failing operationally, creating the appearance of control while quietly limiting our capacity to learn.

Much of Prof. Benanti’s urgency seems to rest on an implicit analogy between AI and technologies, such as nuclear weapons, for which even a single failure would be intolerable. I am not yet convinced that this analogy fully holds, given that most AI systems in use today (December 2025) remain narrow, fragile, and still coupled to human oversight.

This brings me to the question I would like to pose: Is human agency better defended by early, binding constraints, defined before stable patterns of harm have emerged, or by a period of observation, in which partial control is maintained while cases accumulate and meaningful control points become clearer?

This is not a rhetorical question. It is a practical one, and I would welcome Prof. Benanti’s response—not as a rebuttal, but as a continuation of a conversation that, by necessity, remains unfinished.

Bruce Raben

I largely agree with you

But

How are you going to stop it?

Neural Foundry

Thoughtful framing of the accountability gap. The assertion that AI systems must remain legal objects, never subjects, cuts through a lot of the confused discourse around AI rights. I've seen too many policy discussions get derailed by anthropomorphizing these systems. The practical challenge is enforcing oversight when the economic incentives push for speed over safety. The point about developers hiding behind algorithmic complexity is key, because opacity becomes a convenient shield for avoiding responsibility when things go sideways.