- OpenAI's top model has a problem: it seems to keep getting lazy.
- Users of GPT-4 have taken to the ChatGPT maker's forum to complain.
- They're so frustrated that they're turning to rival AI models rather than waiting for a fix.
OpenAI's GPT-4 seems to have gotten lazy — again.
This time, though, frustrated users of the model powering ChatGPT's paid-for service aren't hanging around for a quick fix.
They're looking to other models instead, and one in particular has grabbed their attention: Anthropic's Claude.
OpenAI's top model still seems lazy
In recent days, users of GPT-4, first released in March 2023, have been taking to OpenAI's developer forum and social media to vent about the model seemingly being far less capable than it once was.
Some complain that it ignores "explicit instructions," returning truncated code when asked for complete code. Others say they struggle to get the model to respond to their queries at all.
"The reality is that it has become unusable," one user wrote on OpenAI's online forum last week.
What's frustrating users is that this isn't the first time performance has lagged on a model that's not only meant to be OpenAI's finest, but also one they're paying $20 a month to use.
As my colleague Alistair Barr first reported, signs that GPT-4 was getting lazier and dumber emerged last summer. The model exhibited "weakened logic" and returned wrong responses to users.
More evidence of laziness emerged again earlier this year, with OpenAI CEO Sam Altman even acknowledging that GPT-4 had been lazy. He posted on X in February that a fix had been issued to address complaints.
Back when signs of weakness first emerged, though, no other company had released a model that, on paper at least, offered remotely comparable performance to GPT-4. That kept users attached to the company that arguably triggered last year's generative AI craze.
That's not the case now.
GPT-4 alternatives emerge
In the face of a fresh batch of issues with GPT-4, users have been experimenting with a crop of models released since then that not only match OpenAI's top model but seem to outperform it, too.
Take Anthropic's Claude. The OpenAI rival, backed by the likes of Google and Amazon, released a premium version of its Claude model earlier this month called Claude 3 Opus. Think of it as an equivalent to GPT-4.
On its release, Anthropic shared data that compared Claude 3 Opus' performance to its peers across several benchmarks like "undergraduate level knowledge," "math problem-solving," "code," and "mixed evaluations." Across almost all of them, Claude came out on top.
It's not just Anthropic's data that says its model is better. This week, Claude 3 Opus overtook GPT-4 on LMSYS Chatbot Arena, an open platform for evaluating AI models.
Of course, there's a difference between looking good on paper and delivering in practice. So in the wake of GPT-4's problems, even OpenAI loyalists have had plenty of incentive to try alternatives like Claude.
It's clear many are more than impressed.
After a coding session with Claude 3 Opus, software engineer Anton concluded on X last week that it crushed GPT-4. "I don't think standard benchmarks do this model justice," he wrote.
Angel investor Allie K. Miller acknowledged that GPT-4 feels worse than it did a few months ago. "Most folks I know are using Claude 3," she wrote, along with Mistral AI's Mixtral 8x7B model.
Wharton professor Ethan Mollick even found Claude 3 to be better versed in J.R.R. Tolkien's constructed Elvish languages of Sindarin and Quenya. "When asked to translate 'My hovercraft is full of eels' Claude 3 does an original translation, GPT-4 searches the web," he wrote on X.
On OpenAI's forum, meanwhile, users have called Claude far more reliable for tasks like coding and have likened Claude 3 Opus to the sharper performance GPT-4 delivered upon its release.
OpenAI did not respond to a request for comment from BI on GPT-4's performance issues.
Some, like Miller, don't necessarily think the issues are reason enough to ditch OpenAI altogether. The dip in performance, they say, might be because "OpenAI is focused on the next model" and is devoting resources there instead.
This might be so. As my colleagues Kali Hays and Darius Rafieyan reported this month, OpenAI is poised to release GPT-5 by mid-year.
The least it can do is not be lazy.