Hacker News | rishabhaiover's comments

After a certain amount of context usage, I think I empirically see the stated issues with the Top-K compression strategy. It doesn't catastrophically forget, but nuances fade as I approach the tail end of my context limit.

Yeah, that’s consistent. topK keeps the obvious tokens, but subtle context gets eroded over time rather than dropped all at once.
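To illustrate the erosion effect described above, here is a minimal sketch of top-k context compression. The importance scores and entries are made up for illustration; real systems might score by attention mass, recency, or a learned relevance model.

```python
# Hypothetical sketch: each compression round keeps only the k entries
# with the highest importance score. Low-scoring "nuance" entries erode
# gradually across rounds rather than vanishing all at once.

def topk_compress(context, k):
    """Keep the k (text, score) entries with the highest score."""
    return sorted(context, key=lambda e: e[1], reverse=True)[:k]

# (text, importance) pairs; the subtle details score low.
ctx = [("main goal", 0.9), ("key constraint", 0.8),
       ("edge case note", 0.3), ("subtle caveat", 0.2)]

# Two rounds of compression as the context budget shrinks.
round1 = topk_compress(ctx, 3)     # drops "subtle caveat"
round2 = topk_compress(round1, 2)  # then drops "edge case note"
print([t for t, _ in round2])      # only the obvious tokens survive
```

This matches the behavior both comments describe: nothing is catastrophically forgotten in one step, but each round preferentially discards the lowest-scoring nuance.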

I mean Anthropic clearly wins with the name (Mythos vs 'GPT-5.4-Cyber')

And the worst part is the company is gaslighting people when they report it.

This was obviously a fictional Thanksgiving dinner. Nobody is this geezed up about AI assistance.

Nobody in your circle of friends/acquaintances perhaps.

You're okay with sitting in the rear seat of a car while it drives you around the city, though.

Can't speak for anyone else, but absolutely not. I don't have any interest in self-driving cars.

I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.

Really? It's just a better way to search for recipes, in my experience.

If they used AI to search for it that's different. I meant if they used AI to generate the recipe.

Why? What if you found out a human was involved in creating it?

First, I would find it disrespectful. But second, I would be concerned that the LLM would tell the human to do something dangerous (like undercooking chicken) and since the human is apparently so desperate and clueless that they're using an LLM they wouldn't know it was a problem.

I would not have believed your argument 3 months ago, but I strongly suspect Anthropic actively engages in model quality throttling due to their compute constraints. Their recent deal for multiple GWs' worth of data centers might help them correct their approach.


For what it's worth, Anthropic explicitly denies that: "To state it plainly: We never reduce model quality due to demand, time of day, or server load."

See also https://marginlab.ai/trackers/claude-code/

It's very interesting to me how widespread this conception is. Maybe it's as simple as LLM productivity degrading over time within a project, as slop compounds.

Or more recently since they added a 1m context window, maybe people are more reckless with context usage


It has nothing to do with the context window. Reasoning used to bring measured approaches grounded in actual tool calls. All of that now short-circuits into a quick-fix approach that is unlike Opus-4.5 or 4.6; Sonnet-4.5 used to do that. My context window is always < 200K.

That still leaves open the possibility that they reduce model quality due to profit. ;p


Posted this a while ago:

>Models are not "degrading". They're not being "secretly quantized". And no one is swapping out your 1.2T frontier behemoth for a cheap 120B toy and hoping you wouldn't notice!

>It's just that humans are completely full of shit, and can't be trusted to measure LLM performance objectively!

>Every time you use an LLM, you learn its capability profile better. You start using it more aggressively at what it's "good" at, until you find the limits and expose the flaws. You start paying attention to the more subtle issues you overlooked at first. Your honeymoon period wears off and you see that "the model got dumber". It didn't. You got better at pushing it to its limits, exposing the ways in which it was always dumb.

>Now, will the likes of Anthropic just "API error: overloaded" you on any day of the week that ends in Y? Will they reduce your usage quotas and hope that you don't notice because they never gave you a number anyway? Oh, definitely. But that "they're making the models WORSE" bullshit lives in people's heads way more than in any reality.


I suspect it's not that people don't see the progress; they fail to fully trust laws not truly backed by physics the way the transistor scaling laws are. We empirically see that scaling works and continues to work.



It is a shame if Anthropic is deliberately degrading model quality and thinking compute (which may affect the reasoning effort) due to compute constraints.


Nope, there is a categorical degradation in quality of output, especially on medium- to high-effort thinking tasks.


The downtime forces me to re-examine my utterly dependent relationship with agentic assistance. The inertia to begin engaging with my code directly is higher than it has ever been.


Yeah. It's actually starting to make me anxious. I think I got addicted to these agents.


OAuth is failing; I can't log in via Claude Code.


Same here. Usage limits are still pretty insane too.


Same here.

