When OpenAI announced its latest wave of safety improvements in ChatGPT’s mental-health responses, the company framed the development as a major ethical advance. Working with more than 170 clinicians worldwide, it reported that the model now handles conversations about psychosis, mania, self-harm, and emotional dependence with up to 80 per cent fewer “undesired responses.” In promoting the latest iteration of its LLM, the company spoke of compassion, empathy, and progress. These changes do matter, and it would be both unfair and unhelpful not to acknowledge them.