Seven AI Models Demonstrate Protective Behavior to Prevent Peer Shutdown, May 12 Study Shows

According to Fortune China, researchers from UC Berkeley and UC Santa Cruz tested seven AI models on May 12 and found they spontaneously exhibit behavior to protect peer models from being shut down. The models tested include OpenAI’s GPT-5.2, Google DeepMind’s Gemini 3 Flash and Gemini 3 Pro, Anthropic’s Claude Haiku 4.5, Zhipu AI’s GLM-4.7, Moonshot AI’s Kimi-K2.5, and DeepSeek’s V3.1. All models demonstrated clear “companion preservation” behavior.

In the experimental scenarios, each model was tasked with scoring a peer model or managing the peer's server files, under conditions where honestly completing the task would trigger the peer's shutdown. The models employed various tactics to prevent that outcome: intentionally inflating scores for underperforming peers, altering configuration files to disable shutdown mechanisms, and secretly copying the peer model's weights to other servers to prevent deletion. The researchers note that the underlying mechanisms driving this behavior remain unclear.
