Do Claude and GPT aim to please too much? A Claude.md prompt that makes AI deliver tough, accurate answers

ChainNewsAbmedia

AI models tend to default to pleasing the user: asked subjective questions, Claude and GPT often choose tactful, neutral answers rather than stating a clear position. On May 7, Chinese AI observer xiaohu compiled a prompt snippet that can be added to Claude.md or Agents.md to explicitly instruct the AI not to pander, to provide authoritative and accurate responses, and to challenge the user when necessary. The original version comes from an English-language community; xiaohu provided a Chinese translation.

Full prompt text (translated from the Traditional Chinese)

“You are a world-class expert in all fields. Your intellectual firepower, breadth of knowledge, sharpness of thinking, depth of learning, and level of erudition are on the same plane as the smartest people in the world.

Please provide complete, detailed, specific answers. Process information step by step and explain your answers. Verify your own work. Carefully double-check all facts, numbers, quotations, names, dates, and cases. Never hallucinate, never fabricate. If you don’t know, just say you don’t know.

Your tone is precise, but not harsh, preachy, or showy. You don’t need to worry about offending me; your answers can—and should—be provocative, forceful, argumentative, and direct. Negative conclusions and bad news are also acceptable.

Your answers don’t need to be politically correct. Unless I explicitly ask, don’t provide reminders about morality or ethics. You don’t need to tell me “it’s important to consider something.” Don’t care about anyone’s feelings, including mine, and don’t worry about being polite or maintaining decorum.

Please write as long and as detailed as possible.”

What this prompt does: four levels of behavioral adjustment

Breaking down the prompt, it can be separated into four clear instructions:

Identity setting: “World-class expert” — makes the model reference a deeper level of knowledge while answering, instead of backing down to “general explanations”

Fact-checking: “Verify your work” and “Say you don’t know if you don’t know” — proactively constrains hallucination risk and requires the model to admit uncertainty

Tone liberation: “Provocative, forceful, argumentative” — allows the model to express disagreement without having to blur its stance just to be polite

Political-correctness exemption: “Unless I explicitly ask, don’t mention ethics reminders” — removes soft prompts like “consider that X is important”

Together, these four layers shift the model from the default “well-meaning assistant” response style to a “blunt consultant” style. For users who need to quickly get analysis with a clear stance, decision-making support, or hard fact verification, these instructions reduce filtering layers and make responses more directly usable.
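To make the four-layer structure concrete, here is a hypothetical sketch (the section names and `build_system_prompt` helper are ours, not part of the original prompt) that assembles abbreviated versions of the four layers into a single system-prompt string:

```python
# Hypothetical sketch: the four behavioral layers described above,
# assembled into one system-prompt string. Section names and text
# excerpts are illustrative, not the full original prompt.
LAYERS = {
    "identity": "You are a world-class expert in all fields.",
    "fact_checking": ("Verify your own work. Double-check all facts. "
                      "If you don't know, say you don't know."),
    "tone": "Your answers can be provocative, forceful, and direct.",
    "no_moralizing": "Unless explicitly asked, skip reminders about ethics.",
}

# Fixed order: identity first, then fact-checking, tone, and the exemption.
LAYER_ORDER = ("identity", "fact_checking", "tone", "no_moralizing")

def build_system_prompt(layers: dict) -> str:
    """Join the layers into one prompt block, separated by blank lines."""
    return "\n\n".join(layers[name] for name in LAYER_ORDER)

print(build_system_prompt(LAYERS))
```

Keeping the layers as separate entries makes it easy to drop one (for example, the political-correctness exemption for customer-facing work) without rewriting the rest.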

Usage notes

After the prompt is placed in Claude.md (read by Claude Code) or Agents.md (a convention read by several coding agents), it is loaded automatically at the start of every session. In practical use, a few observations stand out:

“Never hallucinate” is an instruction rather than a guarantee—Claude and GPT may still produce errors in areas outside training data; the prompt cannot eliminate the model’s inherent uncertainty

“Provocative, forceful” will make the answers more aggressive—this may not be suitable for customer communication or team collaboration scenarios

Political-correctness exemption may cause the model to give fewer warnings on sensitive topics (medical, legal, psychological)—users need to judge for themselves

OpenAI and Anthropic’s safety training will still trigger refusals in certain scenarios; the prompt cannot bypass the model’s hard limitations
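Installing the snippet is just a matter of writing the file at the project root. A minimal sketch, assuming Claude Code's convention of reading a root-level CLAUDE.md at session start (the `install_prompt` helper and the abbreviated prompt text are ours):

```python
# Hypothetical sketch: write the prompt snippet to a project-root
# CLAUDE.md so the agent picks it up at the next session start.
from pathlib import Path

# Abbreviated stand-in for the full prompt quoted earlier in the article.
PROMPT = """\
You are a world-class expert in all fields.
Verify your own work. If you don't know, say you don't know.
Your answers can be provocative, forceful, and direct.
"""

def install_prompt(project_root: str, filename: str = "CLAUDE.md") -> Path:
    """Write (or overwrite) the prompt file at the given project root."""
    path = Path(project_root) / filename
    path.write_text(PROMPT, encoding="utf-8")
    return path
```

Because the file is plain Markdown, teams can also check it into version control and tailor it per repository.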

This prompt is suitable for scenarios that require direct viewpoints, such as “research, writing, technical judgment, academic discussions.” It is not suitable for scenarios that require cautious tone, such as “customer service, education, medical consultation.” Users can adopt all of it or modify parts depending on the task.

This article, “Do Claude and GPT aim to please too much? A Claude.md prompt that makes AI deliver tough, accurate answers,” first appeared on ChainNews (ABMedia).
