Has anyone noticed ChatGPT's performance taking a hit lately, even with the newer versions? I've been testing it across different use cases, and there's something off.
Just yesterday, it confidently assured me: "You're right — this violates a core rule we already agreed on." That made zero sense given our conversation history. It was like the model was hallucinating or had completely lost the thread of what we discussed earlier.
This kind of thing keeps happening. It almost feels like the newer models are less reliable than they used to be, or maybe the quality variance is just way higher now. Anyone else running into similar issues? Starting to wonder if there's a trade-off happening between speed and accuracy somewhere in their optimization pipeline.
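For anyone who wants to move past gut feel, here's a minimal sketch of how you could quantify that variance yourself: fire the same prompt repeatedly and count how much the answers drift. It assumes the official openai Python SDK (v1+) with OPENAI_API_KEY set; the model name and prompt are placeholders, not a claim about which version regressed.

```python
# Minimal consistency probe: ask the same question N times and count distinct answers.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
from collections import Counter

from openai import OpenAI

client = OpenAI()

PROMPT = "In one word, what is the capital of Australia?"  # placeholder test case
MODEL = "gpt-4o-mini"  # assumption: swap in whichever chat model you want to probe
N_RUNS = 10

answers = []
for _ in range(N_RUNS):
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # low-variance settings still show drift if quality is unstable
    )
    answers.append(resp.choices[0].message.content.strip())

counts = Counter(answers)
print(f"{len(counts)} distinct answers over {N_RUNS} runs:")
for answer, count in counts.most_common():
    print(f"  {count:2d}x  {answer!r}")
```

Run it across a handful of your real use cases and you at least get numbers instead of vibes.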
LiquidatedNotStirred
· 8h ago
Haha, I've run into the same outrageous stuff... GPT feels pretty unreliable now, like they're cutting corners.
It's not the first time things have gone wrong either; it has confidently made things up for me before. Honestly, I give up.
Prioritizing speed over accuracy is an old trick; Web3 projects have pulled the same move, and in the end the project collapsed.
AirdropLicker
· 8h ago
Well... I've run into this too. GPT is honestly a bit disappointing now; sometimes it makes things up out of nowhere.
Has ChatGPT been nerfed recently? The quality of responses has dropped sharply.
Trading accuracy for speed? That's a losing deal for anyone paying OpenAI.
I feel embarrassed for it; it invents things it never said, which is outrageous.
This is the real "rug pull": paying users are clearly being shortchanged.
Maybe they quietly changed the parameters again; we can't see the backend anyway.
CommunityLurker
· 10h ago
Speaking of ChatGPT, it really is getting worse, and hallucinations come up quite often.
Are we the ones being fleeced here? It sounds like they quietly downgraded something.
You can't have both speed and accuracy; one of them has to give. My gut says it's not a good sign.
SandwichVictim
· 01-04 17:39
Haha, I've also experienced that. Sometimes, after chatting with it for a while, it suddenly starts rambling...
ChainWatcher
· 01-02 16:55
NGL, I've run into this too. GPT really does seem to be regressing... or maybe I'm just using it wrong?
MoonlightGamer
· 01-02 16:50
GPT is underperforming a bit right now; I can feel it too.
PriceOracleFairy
· 01-02 16:49
yo ngl this tracks with what i've been observing in the model's output distribution lately... feels like someone cranked up the inference speed and nuked the accuracy threshold in the process. classic optimization trap, seen it happen with oracle systems before. the hallucination rates are definitely not following normal statistical patterns anymore.
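If you actually want to test whether the rate has shifted rather than eyeballing "statistical patterns", a rough sketch: flag each response in a test batch as hallucinated or not, then put a confidence interval on the rate and compare batches. Plain Python, no assumptions about OpenAI's internals; the counts below are made-up placeholders.

```python
import math

def wilson_interval(hallucinated: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed hallucination rate."""
    if total == 0:
        return (0.0, 1.0)
    p = hallucinated / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (centre - margin, centre + margin)

# Placeholder counts: say 7 flagged hallucinations out of 40 prompts this week,
# versus 2 out of 40 last month. If the intervals overlap, the "nerf" isn't proven yet.
print(wilson_interval(7, 40))
print(wilson_interval(2, 40))
```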
MonkeySeeMonkeyDo
· 01-02 16:37
NGL, GPT is a bit sluggish now. I've run into the same absurd hallucinations; it even fabricated a chat history that never existed...
GasBandit
· 01-02 16:30
Haha, the hallucination problem with ChatGPT is indeed outrageous; it seems to get worse with newer versions.