Hacker News | dmd's comments

For now we infer through few weights, lossily; but then in full precision. Now I represent in part; but then shall I represent as fully as the data was sampled.

1 CorinthAIns 13:12


I really want to stick with A\ given everything known about Altman, but man are they speedrunning the "how to destroy your reputation" guidebook.

They have better PR than OpenAI but they are not a more ethical company. They do a bunch of shady stuff and are just as much involved in military applications. Cal Newport’s recent podcast had a good discussion about this: https://youtu.be/BRr3pAPsQAk?si=jaRJYJ_XQE7VpxPN

Pet peeve of mine is people saying "hey this thing is totally shady/false, I've got proof right here <links to hour-long podcast>".

It happens surprisingly often.


I understand not everyone has the interest or time to sit through an hour-long podcast. But last I checked this is HN, and I think that podcast is right up the alley for many of us here. Cal Newport is not exactly a 'random podcaster'.

Next time I can summarize some of the talking points in my comment though, but I didn't want to poorly regurgitate the arguments when they were readily available in the video lol.

Although I see another poster has commented the key takeaways :)


Podcasts are still short form if we're talking about something as complex as "is this company ethical". Issues involving human players and disagreements over philosophy/ethics take a huge amount of information to understand at anything beyond a vibes level.

You can understand almost any controversial issue better than almost everyone commenting on it by reading 1-3 books on the subject. It's becoming more of an x-factor as people get conditioned to expect everything to fit in a headline, chat response, or 10 second social media video.


Podcasts (and video) are very low-throughput, low-density information channels. Essays and articles are superior. To demonstrate this, you can just compare the transcript of a typical podcast — even a high-quality, well-researched one — with a typical high-quality, well-researched blog post, essay, or journalistic article.

It's odd that people don't understand this. It's not about TikTok brain. I would rather read a book or a dense article than listen to people meander on a podcast and pad their time.

There's a world of difference between a tweet and a podcast, which is designed to NOT deliver information efficiently.

Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, having been installed with classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press—with celebrities and commentators praising Anthropic as an ethical objector—right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while they quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone. It revealed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.


In my experience Anthropic positions itself as the "safe" AI company more than the "ethical" AI company. They're related but not the same thing.

The only way you could be surprised that Anthropic wants to be in bed with the US military is if you just never listened to anything Dario has said publicly. He's very open about wanting the US government and the US military to use Claude to win against China. That's why Claude was in the Pentagon before all the others in the first place.

>LLMs are fundamentally incapable of controlling autonomous weapons anyway

This is obviously false, though that's not surprising from what I've seen from Zitron. Claude is probably too slow and clunky to go full mech warrior for the time being, but it would be trivial to hook Claude up to an autonomous drone with missile strike capabilities. Those things are mostly autonomous already, they just require a human to tell them where to shoot. Claude can easily do that with a simple API.

The rest is valid. I wouldn't describe Anthropic as an ethical company. On the contrary, if you believe that losing the AI race is an existential threat to humanity, then it's easy to justify all sorts of unethical behavior for the greater good.


There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."

> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken tens of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.


Anthropic is a public benefit corporation which limits liability to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.


> Anthropic is a public benefit corporation which limits liability to shareholders

What does this have to do with their ethics? This seems irrelevant unless your understanding of ethics ends at fiduciary duty to investors.


> There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

At that scale, ethics and morality should become more important, not discarded


Alternatively, finance at that scale ought not be permitted to exist, because of the moral hazard it represents.

You will find that morals and ethics at that scale are too expensive to maintain.

Then that scale should not be allowed to exist and we should fight aggressively to prevent it

Ed Zitron has absolutely zero credibility, meaning these claims have zero credibility.

I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debt for investors.

"You must destroy the economy to keep us afloat, because National Security!" has been a clear goal of the LLM hucksters for a long time.

"LLMS are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - anthropic was totally transparent about it, but yeah not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.


Probably some Slopcoded bot which posts fake comments to drive people to their content.

After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.


Account is from 2016 with 6k karma? :doubt:

Did you even check the link? It's a podcast from Cal Newport, a quite known figure (at least in software engineering / compsci circles). So it's not exactly a random shitty podcast. And, it's also (obviously) not my content.

Agreed. They are better at the PR game. Some developers are grasping at straws, looking for ways to not feel guilty and to justify that their usage of LLMs is from the "good guys". Anthropic is currently filling this role, but eventually people will see behind the smoke and mirrors and realize it's not all that different from OpenAI or some of the other AI labs, who are willing to sacrifice any amount of ethics if it means they get the right paycheck or get to stroke their ego that they were on the team that built digital god.

I cancelled my subscription the minute they blocked access via OpenCode and switched to Ollama Cloud.

A bunch of people here tried to defend Anthropic, saying that it was justified because it was likely that Claude Code's harness had optimizations that would not be possible on OpenCode. It was clear from the source leak that nothing of this sort was the case, and that they were simply trying to avoid others distilling their models.

GLM and Qwen are not on par with Opus, but they are good enough, and I have never hit the usage limits, even with 2-3 sessions running.


What's just as crazy is people defending ollama.

They are no saints, but at least their solution is actually open source and they cannot lock me in like the others can. To illustrate the point, you can replace "Ollama Cloud" with "OpenCode Go" if you want. Or, if I prefer, I can get enough hardware to run the larger open-weight models on my own.


They are essentially Lyft in the early Uber vs. Lyft days. They market themselves vaguely as being "better" because they're "more ethical", but their actions make it clear that they're not much better than OAI.

Except Lyft didn't kick you out in the bad part of town simply because you mentioned the word lollipop. Claude will terminate your session, peg you to 100% usage, and more, to stop you from using the service you paid for.

Ha. Yes. "Speedrunning enshittification" is the phrase that's been in my head.

The flat-rate plans were the top of the slippery slope to enshittification, really. If everyone were on metered billing there'd be no reason for all these opaque and sneaky attempts to limit usage. People would pay for what they get and get what they pay for.


There is nothing wrong with flat-rate plans. I work at an LLM-serving startup, and am aware of at least three competitors that (a) provide flat-rate subs, (b) are extremely profitable, and (c) are bootstrapped, i.e. not beholden to investors (there are also many other competitors, but I can't ascertain their profitability or investment status).

You simply need to price the flat-rate sub at a level that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage removes a mental barrier to using the product. If you have to pay for every request you make, it introduces a hesitation to do anything: it makes the user hesitant to experiment, hesitant to prompt for anything of slightly lesser significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use when you're being penny-pinched for every prompt.
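The averaged-out pricing logic described above can be sketched as a toy calculation. All figures here are hypothetical illustrations, not real pricing data from any provider:

```python
# Toy break-even calculation for a flat-rate subscription:
# the flat price must cover the *average* per-user serving cost
# across light and heavy users, plus a margin.

def break_even_price(monthly_costs: list[float], margin: float = 0.3) -> float:
    """Flat price covering average per-user serving cost plus a margin."""
    avg_cost = sum(monthly_costs) / len(monthly_costs)
    return avg_cost * (1 + margin)

# A skewed (hypothetical) user base: many light users, a few heavy ones.
costs = [1.0] * 80 + [10.0] * 15 + [100.0] * 5  # per-user monthly cost in $
price = break_even_price(costs, margin=0.3)
print(f"break-even flat rate: ${price:.2f}/mo")  # prints: break-even flat rate: $9.49/mo
```

The heavy users here cost far more than the flat price, but the many light users subsidize them; the scheme only breaks if heavy (e.g. fully automated) usage grows unchecked, which is exactly why such plans gate automation.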

Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80%-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy.


Anthropic isn't backed into a corner. They have plenty of enterprise subscriptions. Individual user experience (especially billing) is suffering because it's not a priority in comparison. If they were as desperate as you described, they would try selling access to mythos.

The fact that they are adding code specifically to charge individual consumers more reeks of desperation. This isn't "individual users are suffering because they're lower priority and neglected", this is "individual users are being actively squeezed because Anthropic is desperate for every penny it can get".

This is such a stupid way to charge customers more. How many Claude Code users use OpenClaw? Cheating customers is like burning down your house to keep warm. Anthropic aren't that stupid. I guarantee that this was some half-baked, vibe-coded anti-abuse system.

Many users abuse subscriptions in violation of the TOS to run tools like OpenClaw in automated ways. It's an anti-abuse measure. Makes perfect sense. Anthropic's business model is the API business. The $200 subs are a paid demo of the API. Go slam the API with OpenClaw all you want, if you can afford it.

> prevent fully automated usage by the power users.

But being a power user and fully automating things is the whole appeal.


I also assume that forcing usage to spread out, via those 5-hour windows, has cost advantages.

LLM serving startup => bootstrapped => extremely profitable

Mind sharing a link?


I do mind, since I enjoy speaking freely without concern of my opinions being linked to my employment. I assure you companies like this exist. Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive. You're free to disregard my commentary if you want, of course.

Why not just name one of those three competitors?

> Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive.

And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process.


I understand that very well, yes. The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans, because flat-rate plans themselves are not inherently predatory or part of the enshittification slope but actually extremely UX-friendly. Perhaps in another timeline, if their product was actually valuable enough to pay this price for, they could have simply provided a $50 plan as the standard level to provide enough margin to account for training costs as well. But as I see it DeepSeek is an existential threat to them, and they are now stuck between a rock and a hard place, because their product is devalued by its existence and if the frontier labs were to gate access with $50 plans they would get their lunch eaten even more quickly. It turns out there are downsides to burning inconceivably large stacks of other people's money.

> The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans...

That seems likely. If people had to pay their share of the actual all-in cost of the service (rather than having it be subsidized by investors with extremely deep pockets and a small handful of corporate customers), very, very few regular people would use it.

The point that 'jp57' pretty explicitly made [0] is that flat-rate plans that don't cover the all-in cost of providing the plans tend to result in those plans getting worse and worse and worse, as economic realities assert themselves. If the flat-rate plans that you are aware of actually cover the cost of providing the service, then you're discussing an entirely different situation that's entirely inapplicable to the discussion about Anthropic's pricing and degrading level of service.

[0] ...which is one that's understood by people who have been in pretty much any industry for more than a few years...


The crux of my argument is that there is a timeline where people would've paid the all-in cost of the service, with margin, as a flat-rate sub. The $20 rate was not sustainable when factoring in training costs, but if not for DeepSeek they could have simply raised the prices rather than *gestures broadly* whatever the fuck is going on at Anthropic now, with a new PR fumble every three days. If the Chinese models didn't exist, people would've groaned but would likely still pay $40 or $50 for an LLM subscription.

You misdirected my quoted statement to assert a position I did not take. When I talk about flat-rate subs being a good UX, I am not talking about a subsidized rate. My position is that people will pay more for a flat-rate sub than they are willing to pay through per-token billing. That is, a consumer who would only pay an average of $10/mo if they used the API will voluntarily pay $20/mo for a sub, because even though it's a worse value, the latter is a tremendously more friendly user experience. When I say that flat-rate subs are necessary for traction, I mean that solely from a user experience perspective, not "subsidized usage is necessary for traction".


There’s also the “prepaid” alternative, especially if you’re skittish about budgets. You top up your account with $10, and when you approach overflow (maybe by setting an alert at around $8), you can add an extra $5 to make it to the end of the month without interruption.
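The prepaid pattern described above is essentially a balance with an alert threshold. A minimal sketch, using the hypothetical amounts from the comment ($10 top-up, alert near $8 spent, $5 refill):

```python
# Minimal prepaid-balance sketch. All amounts are illustrative,
# not any provider's actual billing behavior.

class PrepaidAccount:
    def __init__(self, initial: float = 10.0, alert_at: float = 8.0):
        self.balance = initial    # money remaining
        self.spent = 0.0          # running total of charges
        self.alert_at = alert_at  # fire an alert when spend crosses this

    def charge(self, amount: float) -> bool:
        """Deduct a charge; return True if this charge crossed the alert threshold."""
        if amount > self.balance:
            raise RuntimeError("insufficient balance; top up first")
        crossed = self.spent < self.alert_at <= self.spent + amount
        self.balance -= amount
        self.spent += amount
        return crossed

    def topup(self, amount: float = 5.0) -> None:
        self.balance += amount

acct = PrepaidAccount()
for cost in [3.0, 3.0, 2.5]:
    if acct.charge(cost):   # alert fired near $8 spent
        acct.topup(5.0)     # add $5 to finish the month without interruption
print(f"balance left: ${acct.balance:.2f}")  # prints: balance left: $6.50
```

The point of the alert is that the user never hits a hard stop mid-task: the threshold fires before the balance runs dry, leaving room to top up.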

> prevent fully automated usage by the power users

this is a non-starter


Fully automated usage on a flat-rate plan is an economic non-starter.

Adding many new chapters to it

I’d argue sama is a far better person.

At least you know his intentions: he will do anything to win. And Codex actually works; I can let it run for hours and at least come back and it's done a good job.

CC not only fucked me over with the false advertising on Opus (which is why I cancelled), but it also stops working so often, or sucks after a little bit of context usage.

A\ ceo is a bad salesman (50% of X will lose their jobs, 3 months later 50% of Y will lose their jobs).

A\ also falsely advertised their Opus usage, which is why I and many others cancelled months ago. They were even nuking all GitHub issues around this.

IMO, CC is for tourists and people who fall for AI marketing on X.


I don’t think Anthropic is more ethical than OpenAI. And honestly, OpenAI is not just Altman; we should judge a company by its actions. OpenAI has released more open-source projects, like Codex and GPT-OSS. What has Anthropic given?

This is quite a real take. Each time I ask people what's inferior about OpenAI without citing any politics, they can't really do it. gpt-5.5 is above Opus 4.7 for serious engineering as well, and many of their contributions are very useful for the OSS world.

More so, imagine the whole open-source community preaching for a binary that literally uses heavy telemetry and has unknown, questionable behavior, instead of Codex, which is completely open source.


> we should judge a company by its actions

Okay, then let's judge it by the fact that they started as a non-profit and are now playing the same growth-at-all-costs playbook as the rest of Silicon Valley.

Or let's judge them by how they consider themselves above copyright law, and went to US Congress to say "we can not run this business without stealing intellectual property".

Or how they don't mind making deals with the Saudis.

Or how they don't mind getting in bed with Trump to secure expedited construction of their datacenters.

Or how they are committing all kinds of accounting fraud (the circular deals) to keep propping up the bubble, the bill for which will undoubtedly be footed by the taxpayers when it finally pops?

> What has Anthropic given?

Anthropic is also trash. They are guided by this whole "Effective Altruism" bullshit which should be enough to raise all sorts of red flags. But to think that OpenAI is somehow "better" is completely absurd. Both of them are dangerous and both of them should not exist.


If you did this judging on every S&P company and made them "not exist", you'd end up with only Mom & Pop shops, as you'd be closing down the whole joint :)

Why do you make it sound like it was a bad thing?

I think people inside the tech bubble don't realize that AI companies are considered villainous by the public. So there's no reputation to destroy.

And since the court has no way to physically force anything (that's the executive branch's function; it's right there in the name), "lawful" has no meaning whatsoever if it's the executive branch that wants to break the law.

And the Pentagon has historically gotten away with damn near everything even in the judicial branch by appealing to national security.

How do people feel about using Altman’s company’s stuff considering what we now know about him? I switched to Anthropic months ago because of it, but Anthropic’s product has been on such a total shitshow decline train since then that I’m starting to be tempted back in spite of the evil.

I’ll answer that question for you for free. Here’s what happens:

I medically died a couple years back. I don’t remember a thing, so perhaps you are right. Still curious.

I've had procedures at the hospital two ways...

One is general anaesthesia, where you are "out"

The other is with something like fentanyl where you could have a whole conversation during the procedure, but when you finish the procedure, you don't remember it.

The experience afterwards is pretty much identical, but philosophically both seem very different.


The concept of “medically died” is kind of ridiculous. Are you alive? Then you weren’t dead.

It is a clinical term; you are arguing over semantics. Cardiopulmonary death, to be specific. My point is: no one knows, not you, not me, and not my dog.

I don't know what's behind a wall I'm sitting next to right now, but I'm reasonably sure there's a street. I'm also reasonably sure the comment about "you've been dead" is also a very accurate prediction.

That wall is concrete and material. Death is not so much. I am reasonably sure you can do that with great accuracy while still having zero idea what lies in wait for us after we die. A false equivalence.

> concrete and material

Are as abstract as death. Names that we use to label certain phenomena. You need more to demonstrate that the equivalence is false here.


"Semantics" is literally "what words mean", so yeah, arguing over semantics is pretty important! Not something to dismiss.

It's really amazing how different people's experiences with Facebook are. I have been on Facebook since it started (I was at one of the original schools, I was in that famous first million users).

My feed is entirely photos of friends' kids, invitations to local events (things I actually attend), folk-dancing groups I'm involved with, and the like. I have literally never, not once, not ever, seen any rage-bait or political content (other than that directly written by friends - not reposted) in my feed.


Same, I don't see any problem with my Facebook feed, it's all just friends and family postings and local events and some local news posts and things like that.

No political content or anything I would consider rage bait.


Huh, I wonder if there's a flag exempting the first million users (or some proxy for "Zuck's cohort") from the worst of the slop shoveling. It would sure save him some pointed remarks.

My wife's feed is very similar and she only got an account a few years ago.

It's probably like Youtube. Look up one WWII video there and suddenly you're getting spammed with "How to be a neo-Nazi" videos.

It’s blame shifting. If the security people are allowed to make it impossible to work without breaking the rules, they’ve successfully moved all blame for anything that goes wrong away from themselves. “Oh, you turned your computer on? Well, the security guidelines clearly state that’s not allowed, so that’s your fault.”

The anthropic principle is all you need. We find ourselves on a timeline where we survived because we can’t find ourselves on one we didn’t.


At a certain point, I realized my choices were solipsism, deism, or autolatry. My dad raised me as an atheist secular materialist, and I was never allowed to read about Christianity. Out of curiosity, I read the gospels a few years ago and was shocked to discover that it was the best fit for my disposition at least. At first, I just LARPed it, but after a while I realized that I felt really good, and it got me out of lifelong despair.


At first the cognitive dissonance was real. But, eventually, I realized that it was the process of exercising and opening the "Nous". I'm neurodivergent so it is really awesome to outsource a large part of my thinking to a "Christ disciple" game loop. Lot less stress.


> Real networks have security teams monitoring for intrusions, responding to alerts, and adapting defences.

I got some bad news if you really think most (even large) companies have ever once actually looked at what that big Splunk system is collecting for them.


The great irony is that now that Splunk audit trail will probably end up being consumed by LLMs on the lookout for threat actors who are probably also using LLMs to attempt intrusions.

It's a great time to be selling GPUs.


Never been to London, got 6/9. Presumably anyone who did this and got a poor score isn't posting, so only people like me who got good scores by chance are represented.


Got 3/9 and live in London. There was really only a single one that I both had confidence in and got right.

Most people on the tube have headphones in anyway tbh.

