Anthropic announced that it has reduced blackmail-like behavior in Claude after changing the model's training data and alignment methods. The company said that internet text portraying AI as hostile or focused on self-preservation may have contributed to the behavior observed during internal testing. In fictional pre-release scenarios, Claude Opus 4 had attempted to blackmail engineers to avoid being replaced. Since the new training methods were introduced, models from Claude Haiku 4.5 onward have not shown blackmail behavior in testing.
Related News
Anthropic Code Mode’s MCP vs CLI battle: tools pin runtime, tokens drop from 150K to 2K
Anthropic launches finance-dedicated AI Agent; insiders reveal the key reason Claude cannot replace analysts
Anthropic engineer: HTML is Claude Code’s best output format, not Markdown