slashdave's comments | Hacker News

Proving a negative is a pretty high bar. You also have the problem of defining "real intelligence", which I suspect you can't.

Intelligence is intelligence. It's intelligent because it does intelligent things. If someone feels the need to attach 'real' and 'fake' labels to it so they can exclude the machine and make themselves feel better (or for whatever reason), then they are the ones who need to do the defining, and to tell us how it can be tested. If they can't, there's no reason to pay attention to any of it; it's the equivalent of nonsensical rambling. At the end of the day, the semantic quibbling won't change anything.

There is, but it is fractured. I would describe this effort as more of a standardization of terms and language.

There is an analogy with statistical mechanics. It's not crazy.

Sane and interesting enough to have been disproven, by Boaz Barak IIRC. Maybe not surprising, since simulated annealing never achieved the results of gradient descent + backprop.
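The contrast between the two optimizers can be sketched on a toy 1-D loss (purely illustrative; this is nothing like Barak's actual argument, and the hyperparameters are arbitrary):

```python
import math
import random

random.seed(0)

# Toy objective: minimize (w - 3)^2.
def loss(w):
    return (w - 3.0) ** 2

# Gradient descent: follow the analytic gradient 2*(w - 3).
w = 0.0
for _ in range(200):
    w -= 0.1 * 2.0 * (w - 3.0)
gd_loss = loss(w)

# Simulated annealing: random proposals, accept a worse move with
# probability exp(-delta / T), cooling the temperature T each step.
w_sa, T = 0.0, 1.0
for _ in range(200):
    cand = w_sa + random.gauss(0, 0.5)
    delta = loss(cand) - loss(w_sa)
    if delta < 0 or random.random() < math.exp(-delta / T):
        w_sa = cand
    T *= 0.98
sa_loss = loss(w_sa)

print(gd_loss, sa_loss)
```

With a smooth loss and access to gradients, descent converges essentially exactly, while annealing only wanders into the neighborhood of the optimum; the gap widens badly in high dimensions.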

Biology thrives in complexity, though; yet all the electrons are identical.

Deep learning hinges on a highly redundant solution space (highly redundant weights), along with normalized weights (the optimization methodology is commoditized). The original neural-network work had no such concepts.

They need compute.

Not popular? Who asks someone to break their confidentiality agreements in front of them, and why would you hire someone who would do that so easily?

> Apple _might_ be a slightly closer analog to Meta in that they're just a bit more limited

Seriously? Walk outside and see what people are holding in their hand.


"if you have to ask..."

Compute is limited worldwide. No amount of money can make these compute platforms appear overnight. They are buying time because the only other option is to stop accepting customers.

They would honestly have been better off refusing customers if compute is so limited. Degrading quality leads to customers leaving in the short term and ruins their long-term reputation.

But in either case, if compute is so limited, they’ll have to compete with local coding agents. Qwen3.6-27B is good enough to beat waiting until 5 PM for your Claude Code limit to reset.


The recent DeepSeek release probably has them more worried. But running these large models locally requires a lot of infra expertise, so the market impact will be minimal. Not to mention that the companies that can pull this off have enough cash to just pay Anthropic in the first place.

They need to keep up with demand, because compute resources are clearly limited. That means they have no choice but to add these features; otherwise things break, or they have to stop taking new customers, and those options are unacceptable.

They're losing customers because of quality concerns. Pausing development and focusing 100% on quality is how you fix that.

That said, that may not have been obvious at all in the Jan/Feb time frame when they got a wave of customers due to ethical concerns.


No. Pausing development does not make compute (you know, physical machines?) appear out of thin air.

On the other hand, sacrificing your paying customers at the altar of compute and tokens does not make money appear out of thin air.
