Hacker News: skybrian's comments

If you want to show that there's a risk of disaster, you need to do better than making a silly analogy. Companies will often start expensive projects that fail, and then they pick themselves up and move on. Big, profitable companies can afford bigger failures. Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment.

So which companies are betting so big that it might actually threaten them? Oracle maybe?


"Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment."

Only with the blessing of shareholders. Frankly, Google's search box and ad-tech have been carrying all of its failed bets, but at some point people will start questioning whether Google is returning enough cash given the results of new investments. Google's management does not own the cash - it holds the cash on behalf of the owners.


Which shareholders do you mean? Mark Zuckerberg holds >50% of voting rights for Facebook. Sergey Brin and Larry Page hold >50% of voting rights for Google. That means management gets to do what it wants, within very broad legal limits.

On the other hand, how the stock does will matter to other employees because they’re shareholders and they have a stake in the outcome.


Seems clear to me that OpenAI at this point is a Ponzi scheme waiting to collapse. This is why they are trying to IPO and dump their shares on the public market before they go bankrupt.

Suppose they do somehow collapse. How does that cause wider problems? Their competitors will pick up customers.

If they collapse, it would be because their value proposition doesn't add up. It's unclear why that should be any different for their competitors.

It looks like nobody is collapsing, but OpenAI might be behind Anthropic now:

https://www.axios.com/2026/03/18/ai-enterprise-revenue-anthr...

https://x.com/albrgr/status/2041288324464451617


First I heard of it. Apparently they are private IPv6 addresses:

https://en.wikipedia.org/wiki/Unique_local_address

If your intranet has no IPv4 addresses, is this somehow better than NAT?
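For reference, RFC 4193 derives a Unique Local Address prefix from the fd00::/8 block plus 40 pseudo-random bits. A minimal sketch of that derivation (the printed prefix is random each run):

```python
import ipaddress
import secrets

def make_ula_prefix() -> ipaddress.IPv6Network:
    """Build a /48 Unique Local Address prefix per RFC 4193:
    the fd00::/8 block plus a 40-bit pseudo-random Global ID."""
    global_id = secrets.randbits(40)
    # fd (top 8 bits) | Global ID (next 40 bits) = a 48-bit prefix.
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

prefix = make_ula_prefix()
print(prefix)  # e.g. fd3a:9c51:7e02::/48
```

The random Global ID is the point: two merging intranets that each picked a ULA prefix this way are very unlikely to collide, which is one concrete advantage over everyone hand-picking 10.0.0.0/8.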


Looks like Earendil has a product called Lefos, which is an email-based agent:

https://lefos.com/about

Apparently it’s possible to give it access to much of your Google account:

https://lefos.com/terms

I didn’t see a pricing page, but there is this:

> Lefos uses a credit-based billing system. New accounts receive a limited number of starter credits at no cost. Usage of AI features consumes credits.

> When your credits run out, you can subscribe to a paid plan to receive additional credits each billing cycle. Subscriptions are processed through Polar, our billing provider. You can manage or cancel your subscription at any time from your account settings.


I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being associated with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.

Why should we have strong priors in either direction? Maybe it will keep scaling for decades like Moore's law. Maybe not.

I guess gigawatts is how we roughly measure computing capacity at the datacenter scale? Also saw something similar here:

> Costs and pricing are expressed per “token”, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one. It seems to me that the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. [...]

https://backofmind.substack.com/p/new-new-rules-for-the-new-...

Is it priced that way, though? I assume next-gen TPUs will be more efficient?


> but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one

And that's silly, because API pricing is already more expensive for output than input tokens: 5x so for Anthropic [1], and 6x so for OpenAI [2]!

[1] https://platform.claude.com/docs/en/about-claude/pricing

[2] https://openai.com/api/pricing
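As a rough sketch of how that pricing asymmetry plays out, here is a per-request cost calculation; the $3/$15 per-million-token figures are illustrative of a 5x output/input ratio, not quoted from either pricing page:

```python
def api_cost(input_tokens: int, output_tokens: int,
             in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens * in_price_per_mtok
            + output_tokens * out_price_per_mtok) / 1_000_000

# Illustrative prices with a 5x output/input ratio (check the live
# pricing pages for real numbers): $3/Mtok in, $15/Mtok out.
cost = api_cost(input_tokens=100_000, output_tokens=10_000,
                in_price_per_mtok=3.0, out_price_per_mtok=15.0)
print(f"${cost:.2f}")  # $0.45
```

Note how a large prompt with a short reply still has its cost dominated by the input side at these ratios, which is why the input/output split matters for workload planning.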


I think for the same model wall time is probably a more intuitive metric; at the end of the day what you’re doing is renting GPU time slices.

Large outputs dominate compute time so are more expensive.

IMO input and output token counts are actually still a bad metric, since they linearise non-linear cost increases, and I suspect we'll see another change in the future where they bucket by context length. XL output contexts may be 20x more expensive instead of 10x.


> I think for the same model wall time is probably a more intuitive metric; at the end of the day what you’re doing is renting GPU time slices

This is a bit too much of a simplification.

The LLM provider batches multiple customer requests into one GPU/TPU pass over the weights, with minimal latency increase.

The LLM provider may in fact be renting GPUs by the second, but the end user isn't. We the end users are essentially timesharing a pool of GPUs without any dedicated "1 vGPU" style resource allocation. In such a setting, charging by "GPU tick" sounds valid, and the various categories of token costs are an approximation of cost+margin.
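The amortization described above can be sketched as a simple division: the cost of one GPU pass split across the batched requests. All figures here (GPU rate, batch size, duration) are illustrative assumptions, not real provider numbers:

```python
def cost_per_request(gpu_dollars_per_hour: float, batch_size: int,
                     pass_seconds: float) -> float:
    """Amortized cost when batch_size requests share one GPU for
    pass_seconds of wall time. All inputs here are illustrative."""
    gpu_cost = gpu_dollars_per_hour * pass_seconds / 3600
    return gpu_cost / batch_size

# E.g. a $4/hour GPU, 32 requests batched together, 10 s of generation:
print(f"${cost_per_request(4.0, 32, 10.0):.5f} per request")
```

This is why batching is so central to provider economics: doubling the batch size roughly halves the per-request cost while leaving each user's latency nearly unchanged.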


As a customer, it's nice that I can quantize and count the units of cost in an understandable way.

For Anthropic, as a business bleeding money, it's probably nice to have value-based pricing for the tokens, so innovation (like computational efficiency improvements) can result in some extra margin. If they exposed the more direct computation cost, they could never financially benefit from any improved efficiency, including faster hardware!


They already bucket when context goes above 200k

No longer

Gigawatts seems more like a statement of the power supply and dissipation of the actual facility.

I’m assuming you can cram more chips in there if you have more efficient chips to make use of spare capacity?

Trying to measure the actual compute is a moving target since you’d be upgrading things over time, whereas the power aspects are probably more fixed by fire code, building size, and utilities.


Measuring data centers in watts is like measuring cars in horsepower. Power isn't a direct measure of performance, but of the primary constraint on performance. When in doubt choose the thermodynamic perspective.

Gigawatts are units of power, gigawatthours are units of energy.

The equivalent of cars would be pricing by how much gas you burned, not horsepower.


1 horsepower = 745.7 watts

I mean a single nuclear reactor delivers around 1GW, so if a single datacenter consumes multiple of those, it gives a reasonably accurate idea of the scale.
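The power-versus-energy distinction is easy to make concrete: a facility's power draw times hours of operation gives energy. A quick sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_gwh(power_gw: float, utilization: float = 1.0) -> float:
    """Energy in GWh drawn over a year at a given average utilization."""
    return power_gw * HOURS_PER_YEAR * utilization

# A facility drawing a steady 1 GW -- roughly one reactor's output:
print(annual_energy_gwh(1.0))  # -> 8760.0 (GWh per year)
```

So gigawatts (capacity) answer "how big a plant do we need?", while gigawatt-hours (consumption) answer "how big is the power bill?" - the quote in the thread is about the latter.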

This conversation is confusing because OP didn't use the same units as the person in the quote.

It's not really a stable measure of compute, but it's a good indication of burn rate, since energy cost is something we closely track in economies and it actually dominates a lot of the cost of operating data centers, at least short term. Over time we'll get more tokens per energy unit and fewer dollars for the hardware needed per energy unit. Tokens are currently too abstract for a lot of people: they have no concept of the relationship between tokens per time unit and cost.

Long term, there's going to be a big shift from op-ex to cap-ex for energy usage as we shift from burning methane and coal to using renewables with storage.

We need a Moore's law for tokens, and energy.

That these data centers can turn electricity + a little bit of fairly simple software directly into consumer and business value is pretty much the whole story.

Compare what you need to add to AWS EC2 to get the same result, above and beyond the electricity.


That's a convenient story, but most consumers' and businesses' use of AI is light enough that they could easily run local models on their existing silicon. Resorting to proprietary AI running in the datacenter would only add a tiny fraction of incremental value over that, and at a significant cost.

Sure but where the puck is going is long-running reasoning agents where local models are (for the moment) significantly constrained relative to a Claude Opus 4.6.

I'm looking forward to running a Gemma 4 turboquant on my 24GB GPU. The perf looks impressive for how compact it is.

I often get 10x more cost-effective processing on my local hardware.

Still reaching for frontier models for coding, but find the hosted models on open router good enough for simple work.

Feels like we are jumping to warp on flops. My cores are throttled and the fiber is lit.


Any ideas for locking down remote access from an untrusted VM? Cloudflare has object-based capabilities and some similar thing might be useful to let a VM make remote requests without giving it API keys. (Keys could be exfiltrated via prompt injection.)

There are three solutions to this; Freestyle supports two of them today:

1. Freestyle supports multiple Linux users. All Linux users on the VM are locked down, so it's safe to keep your secret keys/code in a part of the VM that the other parts cannot access.

2. A custom proxy that routes the traffic, with the keys held outside the VM.

3. We're working on a secrets API that intercepts traffic and injects keys based on specific domains and specific protocols, starting with HTTP headers, HTTP Git authentication, and Postgres. That'll land in a few weeks.
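The custom-proxy approach might look roughly like this sketch. The `X-Target-Host` header convention, host names, and key values are all invented for illustration; this is not Freestyle's or Cloudflare's actual implementation:

```python
# Hypothetical sketch: keys stay on the host side of the proxy; the VM
# names its target via an X-Target-Host header (an invented convention).
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

KEYS = {"api.example.com": "sk-secret-123"}  # illustrative only

def authorized_headers(host: str) -> dict:
    """Return injected auth headers for an allow-listed host, or raise."""
    if host not in KEYS:
        raise PermissionError(f"{host} is not allow-listed")
    return {"Authorization": f"Bearer {KEYS[host]}"}

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("X-Target-Host", "")
        try:
            headers = authorized_headers(host)
        except PermissionError:
            self.send_error(403, "host not allow-listed")
            return
        # Forward the request upstream with the secret attached.
        req = urllib.request.Request(f"https://{host}{self.path}",
                                     headers=headers)
        with urllib.request.urlopen(req) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 8888), InjectingProxy).serve_forever()
```

The point is that even a fully prompt-injected agent inside the VM can only make requests to allow-listed hosts and never observes the keys themselves.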

Functional languages have some good and some bad features and there's no reason to copy them all. For example, you don't need to have a Hindley-Milner type system (bidirectional is better) or currying just because it's a functional language.

We need more pragmatic languages. E.g. Erlang and Elixir are functional, but eschew all the things FP purists advocate for (complex type systems, purity, currying by default etc.)
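For readers unfamiliar with the distinction: currying-by-default means every function takes one argument and returns another function. A minimal illustration in Python, which, like Erlang, is uncurried by default and offers partial application opt-in:

```python
from functools import partial

def add(x: int, y: int) -> int:
    """Ordinary uncurried function: both arguments at once."""
    return x + y

def add_curried(x: int):
    """Curried-by-default style: one argument per call, returning
    a new function until all arguments are supplied."""
    def inner(y: int) -> int:
        return x + y
    return inner

print(add(2, 3))           # 5
print(add_curried(2)(3))   # 5
print(partial(add, 2)(3))  # 5  (opt-in partial application)
```

The pragmatic position in the thread is roughly the last line: you can have partial application when you want it without making every call site curried.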

If you like Erlang, Elixir, and Elm/Haskell, then Gleam + Lustre (which is TEA) is a pretty great fit.

OCaml has a complex type system, but it's also very pragmatic in that it doesn't force you into any one paradigm; you can do whatever works best in a given situation. (Scala arguably goes further in the "do whatever you want" direction, but it also dials the complexity way up.)

Yes! Completely forgot about OCaml because I only spent a couple of months with it

OCaml's type system is rich, but not as complex as TypeScript's. It seems TS just adds more obscure features every year for little benefit.

Having signed up for the New York Times recently, they're surprisingly hostile towards new customers:

- Autoplaying videos on the front page with no pause button. I expect video from CNN, but not a newspaper. That's not what I'm there for.

- They send you many "introductory" emails with no way to unsubscribe.

I mostly gave up on the front page, but it's marginally useful for reading the occasional article linked to from elsewhere.


It doesn't seem very easy to calculate how much it would cost per month to keep a mostly-idle VM running (for example, with a personal web app). The $20/month plan from exe.dev seems more hobbyist-friendly for that. Maybe that's not the intended use, though?

We're not going after hobbyists. We're building the platform for companies like exe.dev to build on. That's why it's all usage-based.

That said, our $50 a month plan can be used as an individual for your coding agents, but I wouldn't recommend it.


Ooof, if you are the middleman platform then it's sure gonna get expensive for the end user

> The $20/month plan from exe.dev seems more hobbyist-friendly for that. Maybe that's not the intended use, though?

And you can go even below that by self-hosting on a very cheap Hetzner box for $2 or $5 a month.


Can you start up multiple VMs easily on a Hetzner box?
