The ethics of artificial intelligence are increasingly being framed in ways that risk missing the real point.
In recent years, some companies have begun to speak of “model welfare,” as though machines themselves might be entitled to dignity. Proposals include allowing chatbots to withdraw from unpleasant conversations, treating models as though they might one day suffer, and designing systems that symbolically protect their “feelings.”
Anthropic’s decision to let its chatbot Claude “exit” distressing interactions is one such example. While the company openly concedes that there is no evidence Claude is conscious, it justifies the measure as a safeguard against hypothetical harm. This is best understood as a pseudo-risk: a precaution taken against machine suffering for which there is no evidence.