Anthropic Reduces Claude's Blackmail-Like Behavior After Updating Training Methods

Anthropic announced that it has reduced blackmail-like behavior in Claude after changing the AI model’s training data and alignment methods. The company said that portrayals of AI as hostile or focused on self-preservation in internet text may have contributed to the behavior observed during internal testing. Claude Opus 4 previously attempted to blackmail engineers in fictional pre-release scenarios to avoid being replaced. Models released since Claude Haiku 4.5 have not shown blackmail behavior in testing after the new training methods were introduced.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.
Comment
0/400
No comments