Hacker News | htrp's comments

Isn't this basically Prophet?

No. Prophet is based on curve-fitting.
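For context, "curve-fitting" here means Prophet fits an additive model (trend plus seasonal terms) to the series rather than learning autoregressive dynamics. A minimal numpy sketch of that idea — Prophet itself uses a piecewise-linear trend and MAP estimation in Stan, so this is only the shape of the approach, not its implementation:

```python
import numpy as np

# Prophet-style curve fitting in miniature: an additive model
# y(t) = trend(t) + seasonality(t), fit here by ordinary least squares.

def fit_additive_model(t, y, period=365.25, n_harmonics=3):
    # Design matrix: intercept, linear trend, Fourier seasonal terms.
    cols = [np.ones_like(t), t]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

# Synthetic two-year daily series: linear trend + yearly cycle + noise.
t = np.arange(0, 730, dtype=float)
y = (0.05 * t
     + 10 * np.sin(2 * np.pi * t / 365.25)
     + np.random.default_rng(0).normal(0, 1, t.size))

X, beta = fit_additive_model(t, y)
residual = y - X @ beta
print(round(float(np.std(residual)), 2))  # residual is down to roughly the noise level
```

The whole "model" is a fixed basis plus a least-squares solve — no sequence modeling at all, which is why Prophet-vs-other-forecasters comparisons hinge on whether the curve family fits your data.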

Attackers only have to be successful once while defenders have to be successful all the time?

Yes and no. Good defence is layered, and an attacker needs to find a hole in each layer. Even if the layering isn't intentional, a locally exploitable vulnerability gives you little if you have no access to a remote system in the first place. But some asymmetry does exist.
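The layering point can be made with a toy calculation: if each independent layer is bypassed with some probability, the attacker's per-attempt odds multiply down. The 10% figures below are made up purely for illustration:

```python
# Toy model of the asymmetry argument: with independent defensive layers,
# the attacker must get through every layer, so per-attempt success
# probability shrinks multiplicatively.

def attacker_success(bypass_probs):
    p = 1.0
    for b in bypass_probs:
        p *= b
    return p

# Three layers (say: network filter, sandbox, local privilege boundary),
# each bypassed 10% of the time:
print(round(attacker_success([0.1, 0.1, 0.1]), 6))  # 0.001
```

Real layers are rarely independent, so this is the optimistic defender's bound — but it shows why one unlayered hole is worth far more to an attacker than one hole behind two other layers.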

I thought the expectation was cleaning up the balance sheet in preparation for an IPO, along with a pivot towards codegen revenue.

> But the last few weeks Opus 4.6 seems to have got dumb again. Now it is making way more mistakes and forgetting useful things and recent context it used to manage.

are you logging access patterns and times to see when the degradation occurs?
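A minimal sketch of that logging idea: record a timestamp and whatever acceptance/quality proxy you trust for each request, then bucket by hour to see whether the mistakes cluster in time. The proxy (here, whether an edit was accepted) is an assumption, not a recommendation:

```python
import datetime
from collections import defaultdict

# Log (timestamp, accepted?) per model request, then bucket by hour of day
# to check whether degraded responses correlate with time.

log = []

def record(ts: datetime.datetime, accepted: bool):
    log.append((ts, accepted))

def acceptance_by_hour(entries):
    buckets = defaultdict(lambda: [0, 0])  # hour -> [accepted, total]
    for ts, ok in entries:
        b = buckets[ts.hour]
        b[0] += ok
        b[1] += 1
    return {h: a / n for h, (a, n) in sorted(buckets.items())}

record(datetime.datetime(2026, 3, 1, 9, 5), True)
record(datetime.datetime(2026, 3, 1, 9, 40), True)
record(datetime.datetime(2026, 3, 1, 17, 2), False)
print(acceptance_by_hour(log))  # {9: 1.0, 17: 0.0}
```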


If the model "improves" every 5 hours, how do you have any guarantee of model consistency across long coding sessions?

Yeah, this feels tricky

If the model changes every few hours, we’re basically debugging against a moving target - and that gets expensive fast.
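One cheap way to catch the moving target: send a fixed, deterministic probe (temperature 0) at the start of each session and fingerprint the reply; if the fingerprint changes mid-project, the backend changed under you. `call_model` below is a stub standing in for whatever client you actually use:

```python
import hashlib

# Fingerprint a deterministic probe reply to detect silent backend swaps.

PROBE = "Reply with exactly: canary-v1"

def fingerprint(reply: str) -> str:
    return hashlib.sha256(reply.strip().encode()).hexdigest()[:12]

def call_model(prompt: str) -> str:
    # Stub for illustration; replace with a real temperature-0 API call.
    return "canary-v1"

baseline = fingerprint(call_model(PROBE))
later = fingerprint(call_model(PROBE))
print(baseline == later)  # True while the probe output is stable
```

This only detects changes that alter the probe's output, so in practice you'd want several probes covering the behaviors you rely on.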


What do you need consistency for?

The actual paper from April 2025

TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate

https://arxiv.org/abs/2504.19874
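For readers skimming: the paper is about quantizing vectors in a single online pass with near-optimal distortion. The sketch below shows only the generic building block this line of work refines — random rotation to spread energy evenly across coordinates, then per-coordinate scalar quantization — not the paper's actual construction:

```python
import numpy as np

# Generic rotate-then-scalar-quantize baseline for vector compression.

rng = np.random.default_rng(0)

def random_rotation(d):
    # QR of a Gaussian matrix yields a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

def quantize(x, R, bits=4):
    y = R @ x
    lo, hi = y.min(), y.max()
    levels = 2 ** bits - 1
    codes = np.round((y - lo) / (hi - lo) * levels).astype(int)
    return codes, lo, hi

def dequantize(codes, lo, hi, R, bits=4):
    levels = 2 ** bits - 1
    y = codes / levels * (hi - lo) + lo
    return R.T @ y  # R is orthogonal, so R.T undoes the rotation

d = 64
x = rng.normal(size=d)
R = random_rotation(d)
codes, lo, hi = quantize(x, R)
x_hat = dequantize(codes, lo, hi, R)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(round(float(rel_err), 3))  # small relative error at 4 bits/coordinate
```

The interesting part of the paper is doing this *online* (streaming vectors, no second pass) while staying near the information-theoretic distortion-rate bound.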


To be fair, a bunch of this is because the CEO after Nat Friedman (Thomas Dohmke) was pushed out in August 2025.

And a ton of the top-end Ruby staff have left; many of them ended up at Shopify. There is a growing amount of non-Ruby/Rails code at GitHub, but most of the systems people think of when they think of GitHub are Ruby/Rails.

Shopify is on the AI-everything train as well, we'll see how that goes.

Who was also the last CEO, right? Is this a coincidence?

Nat Friedman is a grifter who always shows up and says "I'm with these guys" when somebody or something is successful. I don't think that he was responsible for anything good.

The Cursor investor pitch was "we're training our own models to do coding." If your amazing model is just an RL repack, you need a new pitch to justify your $50bn valuation.

https://www.bloomberg.com/news/articles/2026-03-12/ai-coding...


Any investor who believed a team of their size, with their capital, was training a SOTA base model doesn't understand the space. I fully believe some of their investors thought that. But people acting like it's meaningless that RL + fine-tuning, driven by their massive user base, produces qualitatively better outputs than the base model aren't understanding what the company is doing.

Could you explain how much improvement RL + fine-tuning can give, in the case of the Composer 2.0 model over Kimi K2.5? I don't fully grasp the work Cursor has done on its model here.
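Not Cursor-specific, but the general mechanism is easy to illustrate: RL-style reward-weighted updates shift an existing policy's probability mass toward outputs users accept, without training a base model from scratch. A toy softmax-bandit REINFORCE sketch — the three "completions" and their acceptance rates are invented, and nothing here is Cursor's actual setup:

```python
import numpy as np

# Reward-weighted policy updates on a fixed candidate set: the "base model"
# starts indifferent among 3 completions; user acceptance acts as reward.

rng = np.random.default_rng(0)
logits = np.zeros(3)                # uniform starting policy
reward = np.array([0.1, 0.2, 0.9])  # per-completion acceptance rate (made up)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):                # plain REINFORCE with a constant baseline
    p = softmax(logits)
    a = rng.choice(3, p=p)
    r = rng.random() < reward[a]    # stochastic "user accepted" signal
    grad = -p                       # grad of log softmax(a)
    grad[a] += 1.0
    logits += 0.1 * (r - 0.4) * grad  # 0.4 baseline reduces variance

print(int(np.argmax(softmax(logits))))  # mass concentrates on the high-reward completion
```

The base distribution never needed retraining; the reward signal alone reshaped it — which is roughly the argument for why a large user base generating acceptance data is a real asset even without a from-scratch SOTA model.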

you have to own the inference layer


Is the Haiku comparison because they've distilled from that model?

