Persona-based prompting, the near-universal practice of opening with "You are an expert programmer" or "Act as a senior data scientist," actively degrades factual accuracy in large language models. That's the finding from a new pre-print paper out of the University of Southern California, and it should change how you write system prompts.
I've been testing persona prompts in coding assistants and enterprise tools for over a year now, and I always assumed the "act as an expert" prefix was, at worst, harmless. Turns out it's not. The USC researchers found that on the MMLU benchmark, a standard test of language model knowledge, models with expert persona prompts scored 68.0% accuracy versus 71.6% for the same models without the persona. That's a 3.6 percentage point drop just from telling the model it's smart.
What the research actually found
The paper, titled "Expert Personas Improve LLM Alignment but Damage Accuracy," comes from researchers Zizhao Hu, Mohammad Rostami, and Jesse Thomason at USC. They tested persona-based prompting across both instruction-tuned and reasoning models, examining how task type, prompt length, and the persona's placement within the prompt affect results.
The core insight is straightforward: persona prompting is task-dependent. For what the researchers call "alignment-dependent" tasks, like following safety guidelines, adopting a writing style, or role-playing a character, personas genuinely help. For "pretraining-dependent" tasks, like math, coding, and factual recall, personas make things worse.
Why? Because telling a model it's an expert doesn't inject new knowledge. No facts get added. Instead, the persona prefix appears to activate the model's instruction-following mode at the expense of factual recall. The model spends capacity performing the role instead of retrieving the right answer.
The safety results were notable. A dedicated "Safety Monitor" persona boosted attack refusal rates on JailbreakBench by 17.7 percentage points, pushing refusal rates from 53.2% to 70.9%. So personas are genuinely useful for guardrails. They're just not useful for getting correct answers.
Why this matters for developers and tool builders
This lands right in the middle of how most AI-powered developer tools are built today. Open any popular prompting guide, any coding assistant's system prompt, any enterprise LLM deployment, and you'll find some version of "You are an expert full-stack developer." It's practically boilerplate at this point.
Zizhao Hu, a PhD student at USC and co-author of the study, told The Register that asking an AI to adopt the persona of an expert programmer "will not help code quality or utility." But he drew a useful distinction: granular project requirements ("use TypeScript, follow this architecture, prefer these libraries") are alignment-direction instructions that do benefit from detailed prompts. The generic "you are an expert" prefix is the part that hurts.
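Hu's distinction suggests a simple refactor of a typical coding-assistant system prompt: keep the granular, alignment-direction constraints and make the generic persona prefix optional (or drop it). A hedged sketch, with illustrative prompt text of my own, not taken from any shipped tool:

```python
# Hypothetical example prompts; the constraint wording is illustrative.
GENERIC_PERSONA = "You are an expert full-stack developer."  # the part that hurts

PROJECT_CONSTRAINTS = "\n".join([
    "Use TypeScript with strict mode enabled.",
    "Follow the repository's existing layered architecture.",
    "Prefer the project's established validation library.",
])

def system_prompt(include_persona: bool = False) -> str:
    """Granular requirements always; the persona prefix only on request."""
    parts = [GENERIC_PERSONA] if include_persona else []
    parts.append(PROJECT_CONSTRAINTS)
    return "\n".join(parts)
```

Per the study, `system_prompt()` (constraints only) is the configuration to prefer for pretraining-dependent work like coding and factual recall.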