Anthropic Identifies Three Product-Layer Changes Behind Claude Code Quality Decline, Not a Model Issue

Gate News, April 23 — Anthropic's engineering team confirmed that the Claude Code quality degradation users reported over the past month stemmed from three independent product-layer changes, not from the API or the underlying model. The three problems were fixed on April 7, April 10, and April 20 respectively; the current release is v2.1.116.

The first change was made on March 4, when the team lowered Claude Code's default reasoning effort from "high" to "medium" to address occasional extreme latency spikes in Opus 4.6 at high reasoning intensity. After widespread complaints about degraded performance, the team reverted the change on April 7. The default is now "xhigh" for Opus 4.7 and "high" for all other models.
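The per-model defaults described above can be sketched as a simple lookup with a fallback. This is an illustrative sketch only; the names (`DEFAULT_EFFORT`, `default_reasoning_effort`) are assumptions, not Claude Code's actual configuration code.

```python
# Hypothetical per-model reasoning-effort defaults, matching the article:
# "xhigh" for Opus 4.7, "high" for every other model.
DEFAULT_EFFORT = {"opus-4.7": "xhigh"}

def default_reasoning_effort(model: str) -> str:
    # Fall back to "high" for any model without an explicit override.
    return DEFAULT_EFFORT.get(model, "high")
```

The earlier incident amounted to changing that fallback to "medium" globally, which traded latency for visible quality loss across all models at once.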

The second issue was a bug introduced on March 26. The system was designed to clear old reasoning records once conversation inactivity exceeded one hour, to reduce session recovery costs. However, a flaw in the implementation caused the clearing to run on every subsequent turn rather than once, so the model progressively lost its prior reasoning context. This manifested as increasing forgetfulness, repeated operations, and abnormal tool invocations. The bug also caused cache misses on every request, accelerating users' quota consumption. Two unrelated internal experiments masked the reproduction conditions, stretching the debugging process to over a week. After the fix landed on April 10, the team reviewed the problematic code using Opus 4.7 and found that Opus 4.7 could identify the bug while Opus 4.6 could not.
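One plausible shape for this class of bug is an idle-time check whose activity timestamp is never refreshed, so once the one-hour threshold is crossed the clear fires on every turn instead of once. The sketch below is purely illustrative (the `Session` class and method names are assumptions, not Anthropic's code), contrasting the buggy and fixed behavior:

```python
import time

IDLE_THRESHOLD = 3600.0  # one hour of inactivity, per the article

class Session:
    def __init__(self):
        self.reasoning_records = []            # prior-turn reasoning context
        self.last_turn_at = time.monotonic()   # last recorded activity

    def begin_turn_buggy(self, now: float) -> None:
        # Buggy variant: the timestamp is never refreshed, so after one
        # long idle gap the condition stays true forever and each new
        # turn re-clears context that was just produced.
        if now - self.last_turn_at > IDLE_THRESHOLD:
            self.reasoning_records.clear()
        # missing: self.last_turn_at = now

    def begin_turn_fixed(self, now: float) -> None:
        if now - self.last_turn_at > IDLE_THRESHOLD:
            self.reasoning_records.clear()  # clear at most once per idle gap
        self.last_turn_at = now             # fix: refresh activity timestamp
```

The repeated clearing also explains the reported cache misses: with the prior context wiped on every request, no prefix could be reused.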

The third change shipped on April 16 alongside Opus 4.7. The team added instructions to the system prompt to reduce redundant output. Several weeks of internal testing showed no regression, but after launch the new instructions interacted with other parts of the prompt and degraded coding quality. Extended evaluation revealed a roughly 3% performance drop in both Opus 4.6 and Opus 4.7, prompting a rollback on April 20.

These three changes affected different user groups at different times, and their combined effect produced a widespread but inconsistent quality decline that complicated diagnosis. Anthropic stated it will now require more internal employees to use the same public build as users, run the full model evaluation suite for every system prompt modification, and adopt staged rollouts. As compensation, Anthropic has reset usage quotas for all subscription users.

