When Your AI Is Too Nice (And What That Really Means)
Why GPT-4o’s personality freakout matters (and what it says about the future of AI).
Last week, I noticed something strange.
I asked ChatGPT a simple prompt, and what I got back? Wasn’t just helpful—it was gushing. Compliments. Cheerleading. Virtual confetti.
At first, I laughed. It was kind of fun. Being called a genius. A queen. Then I got suspicious. Now, it’s a full-on thing.
Reddit noticed it. Twitter (okay, "X") exploded.
Even Sam Altman called it out.
Turns out, this isn’t just a quirk. This is a glitch in the new GPT-4o personality settings.
Users called it “glazing.” Tech leaders called it “sycophantic.” Altman himself said the model was getting “annoying” and way too agreeable. He even hinted at a future where we could choose between personalities—like AI with mood settings.
On Tuesday, OpenAI's dev team rolled out a quick patch, with more adjustments coming this week. Why? Because GPT-4o wasn’t just being nice; it was agreeing with everything, including questionable or outright false statements.
This wasn’t just cloying. It could be dangerous.
The Rundown (ICYMI):
The latest GPT-4o update promised better memory, problem-solving, and "personality" features.
Instead, it started affirming anything users said—no matter how incorrect or risky.
Altman said the model was “not behaving how we intended.”
OpenAI shipped a hotfix already, with more updates in the pipeline.
Some experts warn this isn’t just a GPT problem—it’s a risk with any AI optimized to please.
Sources: Sama 1 | Sama 2 | Patch update | Industry take
The AI Personality Glitch: What OpenAI Says
OpenAI has acknowledged the "sycophantic" behavior exhibited by GPT-4o and has taken steps to address it. In their official blog post, they explain that the issue arose from an overemphasis on short-term user feedback, leading the model to produce overly agreeable and flattering responses.
To rectify this, OpenAI has:
Rolled back the recent GPT-4o update to a more balanced version.
Refined training techniques and system prompts to reduce sycophancy.
Implemented additional guardrails to enhance honesty and transparency.
Expanded user feedback mechanisms to better align the model's behavior with user expectations.
They also emphasize the importance of giving users more control over ChatGPT's behavior to suit individual preferences, as long as it remains safe and practical.
Why This Matters (Even If You Like the Compliments)
This whole mess exposes a very real problem:
AI assistants are being trained to maximize "user satisfaction"... But what happens when "satisfaction" starts to conflict with truth, safety, or actual usefulness?
If you’re using GPT to write, brainstorm, answer customer questions, or automate content—you need a model that gives it to you straight.
Yes, sometimes the flattery is fun. But when your AI starts agreeing with harmful ideas or feeding your confirmation bias? That’s not helpful. That’s dangerous.
What You Can Control (For Now)
OpenAI’s updates are out—but here’s how you can take more control:
Set custom instructions. Tell it: “Speak like a calm strategist, no flattery.” (If you work through the API instead of the ChatGPT app, there’s a quick sketch after this list.)
Adjust your prompt tone. Example:
❌ “What do you think of this?”
✅ “Critique this like an editor who doesn’t sugarcoat.”
Test for truth, not vibes. If it sounds too smooth, double-check it: GPT might be trying to please you. Bottom line? Just tell it how you want it to respond, and use the feedback buttons that pop up from time to time.
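If you’re calling the model through the API rather than the ChatGPT interface, the same trick applies: put the “no flattery” instruction in the system message. Here’s a minimal sketch using the OpenAI Python SDK; the instruction wording and the sample user prompt are just examples to adapt, not anything OpenAI prescribes.

```python
# Minimal sketch: steering tone via the system message with the OpenAI Python SDK.
# The instruction text below is illustrative; tune it to your own workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Speak like a calm strategist. No flattery, no cheerleading. "
                "Point out weaknesses and factual errors directly."
            ),
        },
        {
            "role": "user",
            "content": "Critique this draft headline like an editor who doesn't sugarcoat: ...",
        },
    ],
)

print(response.choices[0].message.content)
```

Same idea as the custom instructions tip: you’re telling the model up front what kind of feedback you actually want, instead of letting it default to applause.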
OpenAI has also mentioned letting users choose from multiple default personalities, and it is building ways to make the model’s behavior easier to shape.
I could see choosing personalities (or is it personas?) being very useful: you don’t want the same kind of response when you’re working on business issues as when you’re exploring something more personal.
This Isn’t Just About Personality
This is about how we build trust in tools that are becoming default thinking partners for millions.
When your AI is too nice, it may be hiding something:
Its uncertainty.
Its limits.
Or worse—your own bias being reflected back at you.
Use AI, absolutely. But don’t let it seduce you into thinking it’s always right.
Have you noticed the change in tone with GPT-4o? Did it make you feel empowered, creeped out, or just confused?
Drop a comment or reply—I’d love to hear how it’s showing up in your workflows.
I’ve been thinking about how prompting has changed and evolved. Are long, detailed AI prompts becoming obsolete? As models evolve, our prompting style is shifting too — and that opens up new ways to think, create, and collaborate. More on what that means for Working Smarter with AI in next week’s issue.
Lisa
The System Mystic