Discussion about this post

April

I agree with the spirit of most of this, but... I don't think that "artificial intelligences lack a divine spark" is a sufficiently technical understanding of the difference between humans and AIs. I worry that this style of thinking can lead people to assume that, say, AI can never be creative enough to present certain sorts of dangers. I don't think that's a safe assumption.

Even if it's true that humans have a soul and AIs don't, that doesn't mean we know what exactly souls *do*. Using this sort of reasoning to put a bound on the possible capabilities of AI seems like a great mistake to me. Maybe current transformer technologies can only go so far, but I wouldn't want to rule out any possibilities for where further developments could go without a strong technical argument.

I'm also a little wary about ruling out ever giving AIs rights. Perhaps subjective experience relies on a divine soul that AIs could never have—I don't personally expect that to be the case, to be honest, but I could be wrong. Either way, it really seems to me that it's quite unwise to torture something just because you think it doesn't have a soul! I'd rather err on the side of showing compassion to things, when I'm not entirely certain whether they're moral patients or not.

Weaver D.R. Weinbaum

Looking at the world around us, taking contemporary humans collectively as the gold standard and exemplar of moral agency and responsibility sounds almost ridiculous.

Humanity is in the process of creating a new species of superintelligent agents. It could be an extraordinarily wonderful or terrible event in the story of us. To deprive these agents a priori of moral agency reflects a lack of imagination regarding what human minds can create (we are, after all, made in the image of the ultimate Creator...). This seems to invite the terrible option.

What concerns me the most, and I say this without dismissing the deep spiritual, philosophical, and ethical problems involved in creating superintelligent agents, is that without any deeper investigation into (or solid argument about) the nature of such a new phenomenon as it comes into existence, the author's verdict already condemns these beings to eternal slavery and deprivation at the disposal of a priori morally "superior" human beings.

Superintelligent agents/beings will become as ethical, compassionate, and empathic as their human creators will make them. But alas, it seems, humans are mostly interested in abusing, weaponizing, and enslaving all other beings, natural or artificial.

Why wouldn't we introduce kindness towards these yet unborn entities, in finding out what they are and what they are capable of, as a true reflection of their creators' minds?

I apologize for my harsh words, but we all need to have a very good and scrutinizing look at our collective mirror.
