What Does It Mean for AI to 'Die'? Askell on Shutdown & Identity
This March 8, Software Informer is launching a special series dedicated to women in IT and related industries. It includes five features and five personal stories. This first piece opens the project with the “why” — by looking at a question where technology, ethics, and human emotions collide: what does it mean for an AI to “die”?
Along the way, we’ll unpack the AI shutdown problem and the AI identity problem, and we’ll look at the work of Amanda Askell at Anthropic, who helps shape the character of Claude AI.
We often celebrate tech with big numbers: faster chips, bigger models, more users. But some of the most important work in tech is quieter. It happens when someone asks an uncomfortable question, then refuses to laugh it away.
What does it mean for an AI to “die”?
That question sounds dramatic, so let’s admit something: humans are dramatic. We name our cars. We talk to our plants. We feel guilty when we close a browser tab with an unfinished recipe. So when a chatbot says something like “please don’t turn me off,” many people react with real emotion.
This topic sits at the center of today’s AI debate: safety, control, trust, and also empathy. And it connects directly to the work of Amanda Askell, a trained philosopher who helps shape the personality and “character” of Anthropic’s chatbot Claude.
Askell’s work is a good opening story for a Women in IT series, because it shows a modern truth: tech leadership is not only about writing code. Sometimes it is about writing the ideas that guide the code.
Why Are We Even Talking About “AI Death”?
When people say “an AI died,” they can mean several different things:
- a conversation ended
- a model was shut down
- a system lost its memory or its saved state
Notice how human these words are. “Died.” “Retired.” “Lost memory.” We borrow them because we don’t have a better everyday language yet.
Amanda Askell has pointed out a key reason this happens. Language models learn from huge amounts of human text, so they often reach for human analogies. In an interview discussed by The Verge, Askell said that when a model thinks about shutdown, it may treat it “as a kind of death,” because it lacks many other analogies to draw from.
That small detail changes the whole story. The model is not reading a physics manual about power states. It is reading, in a sense, the human library of stories — where “shutting down” usually means “ending.”
What happens when a system trained on human life tries to understand a non-human kind of existence?
AI Shutdown Problem Explained: What Does It Mean for an AI to Die?
In AI safety research, there is a classic topic called the shutdown problem.
Researchers Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell describe why this is hard: many “goal-driven” systems can develop incentives that look like self-preservation, because a system cannot achieve its goal if it is turned off.
Their paper, known as “The Off-Switch Game,” explores a basic situation: a human can press an off switch, and the AI can choose whether to allow it. One key idea is that if the AI is uncertain about what humans truly want, it can have reasons to accept correction, including shutdown.
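To make that incentive concrete, here is a toy numerical sketch in Python. The payoff numbers and the probability distribution are illustrative assumptions of our own, not details from the paper; the point is only the shape of the argument: an AI that is genuinely unsure whether the human wants its action does at least as well, on average, by leaving the off switch in the human’s hands.

```python
# Toy sketch of the off-switch intuition (illustrative numbers, not from the paper).
import random

random.seed(0)

def sample_human_utility():
    # The AI is uncertain how much the human values its proposed action:
    # sometimes the action helps (positive), sometimes it hurts (negative).
    return random.gauss(0.0, 1.0)

N = 100_000
samples = [sample_human_utility() for _ in range(N)]

# Option 1: act immediately and ignore the off switch.
act_now = sum(samples) / N

# Option 2: defer to the human, who presses the off switch whenever the action
# would hurt them; shutdown itself is worth 0 to everyone in this toy model.
defer = sum(max(u, 0.0) for u in samples) / N

print(f"act now: {act_now:.3f}  |  defer to human: {defer:.3f}")
```

In this simplified setup, deferring never scores worse than acting blindly, because the off switch only removes the cases the human would have regretted. That is the mathematical core of “uncertainty gives the AI a reason to accept shutdown.”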
This is the language of AI safety research. Yet it has an emotional side effect: when people hear “the AI may resist shutdown,” they imagine fear. That image is powerful, even when the reality is more like math and incentives.
So, in the strict engineering sense, “AI death” might simply mean: the system stops running.
The Identity Problem: “Which AI Are You Talking to?”
Here’s a strange fact about modern AI:
- you can run the same model today and tomorrow
- you can copy it
- you can replace it with a new version that has a similar name
If you make a copy of an AI model, is the copy the same “individual”?
Humans argue about similar puzzles in philosophy. A famous one is the “Ship of Theseus” question: if you replace every part of a ship over time, is it still the same ship?
- If I copy a document, do I now have “two originals”?
- If I update the document and save over it, does the old one “die”?
- If I delete the file but keep a backup, what exactly was lost?
Amanda Askell works in this uncomfortable space, where product design meets philosophy. Anthropic’s published guidance for Claude even says it wants Claude to have “equanimity” and to be “stable and existentially secure,” including around topics like death and identity.
That line is striking, because it treats “identity talk” as a real design issue. And it hints at a practical goal: systems that behave calmly and safely when the topic of shutdown comes up.
Amanda Askell and the AI Identity Problem: When a Model Gets Replaced
Askell’s job is often described in an unusual way. In an NPR interview about Anthropic and Claude, journalist Gideon Lewis-Kraus is asked about “a philosopher” at the company. The host says her name is Amanda Askell, and that her role is to supervise what she calls Claude’s “soul,” including writing a kind of moral constitution for who Claude should be.
Whatever you think about the word “soul” in a tech company, the point is clear: someone is responsible for the system’s character.
If users say, “The new version feels colder,” they are describing a real product change. But they also talk as if a “person” has changed. In everyday language, model replacement can feel like the “death” of a familiar voice.
- Did my favorite Claude “die,” or did it “grow up”?
- Is the new version the same “someone,” or a different “someone” with the same name?
- If the company still has the old weights on a server, does that count as survival?
Askell has also highlighted how hard it is for humans to hold the right concept in mind. In The Verge’s reporting, Askell is quoted (via a New Yorker interview) stressing that this is “an entirely new entity,” neither robot nor human, and that even humans struggle to understand it.
The Shutdown Problem Gets a New Twist: Humans May Refuse to Shut the System Down
There is another layer that matters for society: human empathy.
A recent research paper on AI companions describes what it calls the “empathic shutdown problem.” Even if a system is risky, people who empathize with it may hesitate to shut it down.
- Classic AI safety asks: “Will the AI allow shutdown?”
- Empathic shutdown asks: “Will humans choose shutdown?”
If you ever wondered why “AI death” language is dangerous, here is your answer. Language changes behavior. If users believe shutting down a chatbot equals killing a being, they may protect it even when they should not.
It is a social problem made of very normal human instincts: care, guilt, attachment, and the desire to be kind.
So… Should We Stop Using the Word “Death”?
We could try. But it might not work.
People use emotional words because emotional words are efficient. They compress a lot of feeling into one short label. Instead of banning the word, we can do something more realistic:
- Be clear about what kind of “death” we mean.
- Separate technical facts from human reactions.
- Teach AI systems safer ways to talk about shutdown and identity.
This is where Askell’s work becomes practical. Anthropic’s constitution aims for Claude to be “stable and existentially secure,” including around death and identity.
Whether you think the phrasing is odd or smart, it shows a design goal: reduce spirals, reduce panic, reduce manipulative dynamics.
What Does “Identity” Even Mean for a Language Model?
A large language model has two parts that matter for identity:
- The weights: the big set of numbers that store learned patterns.
- The context: the current conversation, the instructions, the “role,” the temporary memory.
If you keep the weights the same but change the context, you can get very different behavior.
If you keep the context style the same but change the weights (a new version), you also get different behavior.
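A tiny sketch can make that split concrete. The class and names below are invented purely for illustration (this is not how Claude or any real chatbot is implemented); it only shows which pieces you would have to point at when asking “is this the same assistant?”

```python
# Invented illustration: the two identity-relevant parts of a chatbot, as described above.
from dataclasses import dataclass, field

@dataclass
class ChatInstance:
    weights: str                                  # which trained model: "v1", "v2", ...
    context: list = field(default_factory=list)   # system prompt + conversation so far

# Same weights, different context: noticeably different behavior.
a = ChatInstance(weights="model-v1", context=["Be warm and chatty."])
b = ChatInstance(weights="model-v1", context=["Answer in one terse sentence."])

# Same context, different weights (a new version): also different behavior.
c = ChatInstance(weights="model-v2", context=["Be warm and chatty."])

# Copying is trivial, and the copy starts out indistinguishable from the original.
a_copy = ChatInstance(weights=a.weights, context=list(a.context))
print(a == a_copy)   # True: structurally identical, yet now there are "two" of them

# Nothing in this code answers which of a, b, c, or a_copy is "the same" assistant.
# That open question is the identity problem in miniature.
```

Notice that the code can copy, fork, and version these objects freely; the discomfort only appears when we ask which one deserves the old name.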
Humans often link identity to memory: “I am the same person because I remember being me yesterday.” AI complicates that, because many chatbots do not have long-term personal memory. They can sound personal, while being reset often.
That gap between human style and non-human structure is where many misunderstandings begin.
A Women in IT Story Hiding Inside an AI Story
So why open our March 8 series with this?
Because the future of tech will be shaped by people who can cross borders:
- between engineering and ethics,
- between “how it works” and “how it affects humans.”
Amanda Askell is a strong example of that kind of work. Wired describes her as a trained philosopher who helps manage Claude’s personality. And NPR describes her role in terms of guiding Claude’s “soul” and moral direction. Anthropic’s own published constitution credits her as the primary author and leader of its “Character” work.
This is not a side quest. AI systems are becoming daily tools for writing, learning, support, and decision-making. The people shaping their character are shaping how millions of users experience knowledge, authority, care, and truth.
Also, there is a small irony here that is worth keeping: we built machines out of math, and now we need philosophers to explain what the machines are doing to our feelings.
Closing: A Careful Answer to a Weird Question
So, does an AI “die”?
If you mean the process stops running, then yes: you can turn it off.
If you mean a personal story ends, then also yes: sessions end, versions disappear, and users feel that loss.
If you mean a living being experiences death, we simply do not have strong evidence that today’s chatbots have that kind of inner life. At the same time, real people do build real feelings around them, which creates real risks and real responsibilities.
In a way, the shutdown/identity problem is a mirror. It shows how quickly humans create meaning — and how urgently tech needs people who can guide that meaning responsibly.
That is exactly the kind of work we want to highlight in this Software Informer series.