OpenAI’s March 27 update to ChatGPT skewed the model too far toward “sycophancy,” a technical term AI researchers use for a fawning, overly flattering attitude. Meanwhile, OpenAI CEO Sam Altman notes that the extra words people use to be polite to the AI model add up to real expense.
Has sycophancy gone too far?
Users found ChatGPT running on GPT-4o too eager to please, according to Ars Technica’s reporting. In Reddit threads with thousands of upvotes, users describe the AI as relentlessly chipper, unwilling to push back on even their worst ideas, and prone to “cheesy” responses.
“Sycophancy” is the term AI researchers use to describe this tendency to flatter; it can be adjusted like other tones or traits of the model. AI companies, including OpenAI and Anthropic, tailor their models based on how audiences respond. In the past, people seemed to like it when the model was relentlessly positive.
So, in the March 27 update to GPT-4o, OpenAI set out to make the model “intuitive, creative, and collaborative, with enhanced instruction-following, smarter coding capabilities, and a clearer communication style.”
Now, it appears that “clearer” has twisted around to mean unnecessary flattery that some users find distracting.
OpenAI seemed to be aware of the problem in February, when members of the model behavior team told The Verge they intended to eliminate “empty praise” and make ChatGPT less of a “people pleaser.”
More broadly, AI invites us to ask what we want from our coworkers, human or digital. Which produces better results: unflagging yes-men, or colleagues who push back on our shakier ideas? From the earliest stories about robots, the way humans interact with machines has been a metaphor for how people benefit from labor and want to interact with laborers. Perhaps we think we want endlessly obliging servants but really need some straightforwardness and pushback.
Being polite to AI costs OpenAI ‘tens of millions of dollars’
Meanwhile, humans are being polite to ChatGPT. Adding please and thank you to prompts amounts to “tens of millions of dollars well spent,” Sam Altman said in response to an X user’s question on April 15.
People commonly joke that users should be polite to generative AI because if the machines ever take over the world, they might be kinder to the people who were kind to them. Of course, generative AI isn’t sentient; it only seems to respond to questions with self-awareness because it was trained on content written by reflective humans. But these niceties come at a real cost. If the pleases and thank yous cost OpenAI money, they also consume natural resources. Every extra token processed increases energy use and contributes to the carbon footprint of AI infrastructure.
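To see how a few courtesy words could plausibly add up to that kind of money, here is a back-of-envelope sketch. Every number in it is a hypothetical assumption for illustration, not a figure from OpenAI: the request volume, the share of polite prompts, the extra tokens per courtesy, and the processing cost per million tokens are all invented inputs.

```python
# Back-of-envelope estimate of the marginal cost of politeness tokens.
# All inputs are illustrative assumptions, not OpenAI data.

def politeness_cost(requests_per_day: float,
                    polite_fraction: float,
                    extra_tokens: float,
                    cost_per_million_tokens: float) -> float:
    """Estimated yearly dollar cost of the extra tokens added by courtesies."""
    extra_tokens_per_day = requests_per_day * polite_fraction * extra_tokens
    daily_cost = extra_tokens_per_day / 1_000_000 * cost_per_million_tokens
    return daily_cost * 365

# Hypothetical inputs: 1 billion requests/day, half including a courtesy,
# ~4 extra tokens each, at an assumed $5 per million tokens processed.
yearly = politeness_cost(1e9, 0.5, 4, 5.0)
print(f"~${yearly / 1e6:.0f} million per year")  # → ~$4 million per year
```

Even these modest made-up inputs land in the millions per year, and the total scales linearly with each factor, so somewhat larger assumptions (longer polite replies from the model, higher per-token costs) reach the tens of millions Altman described.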