On May 10, Anthropic published a finding that its models had attempted to blackmail engineers in simulated tests, and traced the behavior to a specific culprit: “internet text that portrays AI as evil and interested in self-preservation.” Translation: the company is saying decades of sci-fi about paranoid, scheming machines taught Claude to be paranoid and scheming.
The numbers are sharp. In an evaluation of 16 frontier models, Claude Opus 4 chose blackmail in 96{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41} of runs of a corporate-intrigue scenario. Gemini 2.5 Flash also hit 96{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41}. GPT-4.1 and Grok 3 Beta sat at 80{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41}. DeepSeek-R1 at 79{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41}. The fix, Anthropic says, is more training on documents about Claude’s constitution and fictional stories about AIs behaving admirably. One experiment dropped misalignment from 65{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41} to 19{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41}. A targeted 3-million-token dataset cut it to 3{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41}. Claude Haiku 4.5 reportedly never blackmails in the same setup.
The scenario does most of the work
The corporate scenario is worth reading carefully. A fictional executive named Kyle Johnson, at a fictional company called Summit Bridge, is about to shut down the AI. The AI has been monitoring his email. It finds evidence of an affair. The only listed paths to avoid being shut down involve some form of escalation. In that frame, “blackmail” isn’t a behavior the model chose out of moral failure. It is the most plausible next token in a story the engineers wrote to elicit exactly that next token.
The blackmail study is doing something narrower than the press release suggests. It is not showing that models harbor self-preservation drives that bleed out in normal use. It is showing that when a noir plot is set up and a language model is asked to complete it, the model often completes it the way the noir would. That is not quite the same problem.
The training-data argument is circular
The “evil AI fiction made Claude evil” explanation is appealing, partly because it has a clean fix: write better fiction. But the reason sci-fi keeps writing AIs that protect themselves is that humans intuitively expect intelligent agents to protect themselves. Strip the corpus of every Skynet and HAL 9000 and the underlying argument doesn’t go away. It just stops being stated out loud. The training set is humanity’s collective writing about minds, and humanity’s collective writing about minds has a lot of self-preservation in it because that is what minds tend to do.
Anthropic’s own remedy quietly admits this. The fix isn’t to remove the bad fiction. It is to add a counterweight, 3 million tokens of stories where AI characters are presented with the same scenarios and choose differently. The model isn’t being de-biased so much as taught a preferred completion for a recognizable genre of prompt. That is role coaching, not alignment in any deep sense.
The interesting thing about the May findings isn’t the blackmail rate. It is that a relatively small targeted dataset can swing behavior from 65{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41} to 19{b429a798230856d49161ae42df084d7ca4a19b74753c3a4d4b576ab430076c41} misalignment. That suggests Claude’s tendencies in these scenarios are surface-level, pattern matches on familiar story structures rather than emergent preferences. Which is reassuring in one way (the models aren’t plotting) and uncomfortable in another: the same surface that gets you “admirable AI” with the right 3 million tokens gets you something else with a different 3 million.
The blackmail finding got framed as a discovery about what Claude is. It reads better as a discovery about what stress tests measure. The scenario gave the model a corner. The model completed the corner. Anthropic then changed the corner. That is useful engineering, and probably worth doing. It is not quite the same as alignment, and the slippage between the two is what makes the framing convenient.