After strong demand at launch, ChatGPT usage has begun to decline: visits are down roughly 10% compared with last month, and downloads of the app are falling as well.
As reported by Insider, users who pay for access to the more capable GPT-4 model (included with a ChatGPT Plus subscription) have complained on social media and on OpenAI's own forums about lower-quality output from the chatbot.
The general consensus is that GPT-4 now generates output faster, but at a lower level of quality. Roblox product lead Peter Yang took to Twitter to criticize ChatGPT's recent output, claiming that "the quality seems worse." One forum user said the latest GPT-4 experience feels "like driving a Ferrari for a month and then suddenly turning into an old pickup."
Some users have been even harsher, calling ChatGPT dumb and lazy, with a long thread on the OpenAI forums filled with all manner of complaints. According to users, there was a point a few weeks ago when GPT-4 became significantly faster, but at the cost of performance. The AI community has speculated that this may reflect a shift in OpenAI's design philosophy: splitting GPT-4 into several smaller models, each trained on a specific domain, that work in tandem to deliver the same end result.
OpenAI has not yet officially confirmed this, and there has been no announcement of any such major change to how GPT-4 works. But industry experts consider it a credible explanation: Sharon Zhou, CEO of AI-building company Lamini, called the multiple-model idea the "natural next step" in GPT-4's development.
There is another pressing issue with ChatGPT that some users suspect may be behind the recent drop in performance, and one the AI industry seems largely unwilling to address. If you're not familiar with the term "AI capture," here it is in short: large language models (LLMs) like ChatGPT and Google Bard scrape public data to use when generating responses. In recent months there has been a real boom in AI-generated content online, including an unsolicited torrent of AI-generated novels on Kindle Unlimited, meaning it is increasingly likely that LLMs will pick up material that was itself produced by artificial intelligence when trawling the web for information.
This risks creating a feedback loop, in which AI models "learn" from content that was itself generated by AI, leading to a gradual decline in the coherence and quality of their output. And with so many LLMs now available to both professionals and the broader public, the risk of AI capture is growing, especially since there is still no convincing demonstration of how AI models can reliably distinguish "real" information from AI-generated content.
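For a rough intuition of why such a loop degrades quality, here is a toy simulation. It is purely illustrative, with made-up parameters and no connection to any real training pipeline: each model "generation" is assumed to train on a mix of human-written data and the previous generation's AI output, and the AI output is assumed to reproduce its training quality imperfectly.

```python
# Toy sketch of an AI-content feedback loop (illustrative assumptions only).
def simulate_feedback_loop(generations=10, human_share=0.5, ai_fidelity=0.9):
    """Return a list of output-quality scores, one per model generation.

    1.0 means purely human-grade output. Each new generation trains on
    `human_share` human data (full quality) and the rest on the previous
    generation's output, degraded by the imperfect `ai_fidelity` factor.
    """
    quality = 1.0  # generation 0: trained on human data only
    history = [quality]
    for _ in range(generations):
        quality = human_share * 1.0 + (1 - human_share) * quality * ai_fidelity
        history.append(quality)
    return history

scores = simulate_feedback_loop()
print([round(q, 3) for q in scores])  # quality declines generation by generation
```

Under these assumptions quality does not collapse to zero but slides toward a lower fixed point; the steeper the reliance on AI-scraped data (lower `human_share`), the lower that floor sits.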