On top of this, I find ads on app store search results to be particularly bad. It's a business model that comes with all sorts of perverse incentives. It's so bad for users and legitimate developers.
The Mac Studio, in some ways, is in a class of its own for LLM inference. I think this is Apple leaning into that. They didn't add RDMA for general server-clustering usefulness. They added it so you can link four Studios into an LLM inference cluster, exactly as demonstrated in the article.
It almost certainly is not. Until we know what the useful life of NVIDIA GPUs is, it's impossible to determine whether this is profitable.
The depreciation schedule isn't as big a factor as you'd think.
The marginal cost of an API call is small relative to what users pay, and utilization rates at scale are pretty high. You don't need perfect certainty about GPU lifespan to see that the spread between cost-per-token and revenue-per-token leaves a lot of room.
And datacenter GPUs have been running inference workloads for years now, so companies have a good idea of rates of failure and obsolescence. They're not throwing away two-year-old chips.
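The spread argument can be made concrete with back-of-envelope arithmetic. Every number below is a hypothetical placeholder chosen only to show the shape of the calculation, not a real provider's figure:

```python
# Back-of-envelope unit economics for GPU inference.
# All numbers are hypothetical placeholders, not real provider figures.

gpu_cost_per_hour = 2.50      # amortized hardware + power + hosting ($/hr)
tokens_per_second = 5000      # sustained batched throughput (illustrative)
utilization = 0.60            # fraction of the hour spent serving requests

tokens_per_hour = tokens_per_second * 3600 * utilization
cost_per_million_tokens = gpu_cost_per_hour / tokens_per_hour * 1_000_000

revenue_per_million_tokens = 2.00   # hypothetical API price ($/1M tokens)
margin = revenue_per_million_tokens - cost_per_million_tokens

print(f"cost/1M tokens:    ${cost_per_million_tokens:.2f}")
print(f"revenue/1M tokens: ${revenue_per_million_tokens:.2f}")
print(f"spread/1M tokens:  ${margin:.2f}")
```

The point isn't the specific outputs, it's that the spread is dominated by throughput and utilization; even halving the GPU's useful life (doubling the amortized hourly cost) moves the cost line far less than a 2x change in tokens per second does.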
> The marginal cost of an API call is small relative to what users pay, and utilization rates at scale are pretty high.
How do you know this?
> You don't need perfect certainty about GPU lifespan to see that the spread between cost-per-token and revenue-per-token leaves a lot of room.
You can't even speculate about this spread without at least a rough idea of cost-per-token. Right now, any cost-per-token figure is pure paper math.
> And datacenter GPUs have been running inference workloads for years now,
And inference resource intensity is a moving target. A new model that requires 2x the resources changes the math overnight.
> They're not throwing away two-year-old chips.
Maybe, but they'll be replaced when either (a) a higher-performance GPU can deliver the same results with less energy, less physical density, and less cooling, or (b) the extended support costs become financially untenable.
I was thinking more Lua/Luau, which make it trivial to restrict permissions. In general, the gaming client has access to a lot more information than it shares, so to prevent cheats from plugins, the developers have to be explicit about security boundaries.
> many people have never actually seen the colour "violet" which is a single wavelength of visible light
The violet seen in a rainbow (in nature, not in a photo) is legit single-wavelength violet. Same with the rainbows created by shining white light through a prism.
It's true that you don't really get to see it in isolation very often though. Maybe some flowers, birds, or butterflies? Or maybe the purple glow you get from UV lights?
Because the cone isn't really a "blue" cone, and neither is the "red" one. The curves overlap in complex ways. A pure violet photon also slightly stimulates the long wavelength cone.
That's why red+blue=purple feels a bit like violet. It creates a similar double firing.
(And why red plus green gives an even more accurate yellow. The long and medium cones have a lot of overlap.)
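As a toy illustration of that L/M overlap: the sketch below models cone sensitivities as crude symmetric Gaussians. The real cone fundamentals are asymmetric and the peak/width values here are rough ballpark assumptions, not CVRL data, but the qualitative point survives: light near 580 nm (yellow) drives both L and M cones substantially while barely touching S.

```python
import math

# Crude Gaussian approximations of cone sensitivities: (peak nm, width nm).
# These are illustrative ballpark values, NOT the real cone fundamentals.
CONES = {"L": (560, 50), "M": (530, 45), "S": (420, 30)}

def response(cone, wavelength_nm):
    """Relative response of one cone type to a monochromatic wavelength."""
    peak, width = CONES[cone]
    return math.exp(-((wavelength_nm - peak) / width) ** 2)

# Monochromatic yellow (~580 nm): strong L, substantial M, essentially no S.
# That shared L+M firing pattern is why a red+green mixture reads as yellow.
for cone in "LMS":
    print(cone, round(response(cone, 580), 3))
```

With this toy model, the S response at 580 nm is effectively zero while L and M both fire, which is the "double firing" pattern the comment describes.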
This is a common misconception, but the sensitivity of L cones ("red" cones) increases monotonically until about 570nm (monochromatic yellow), so violet light stimulates L cones the least out of all visible wavelengths of light. Magenta light, a mixture of red and blue wavelengths, stimulates L cones far more than violet light. See Wikipedia's LMS responsivity plot[1] or the cone fundamental tables from the Color & Vision Research Laboratory at [2].
I think the misconception comes from plots of XYZ color matching functions[3]. The X color matching function indeed has a local maximum in the short wavelengths, but X doesn't represent L cone stimulation; it's a mathematically derived curve used to define the XYZ color space, which is a linear transform of LMS color space selected for useful mathematical properties.
It is technically the bluest color possible. What we perceive as true blue is different, and the brain has a weird imaginary magenta gradient between blue and red to confuse things further.
First of all, all colors are imagined only in our minds.
Second, the term imaginary color already exists, and it refers to a specific thing [0], and the colors on the line of purples are not among them. What you are describing is a non-spectral color. They exist in day-to-day life and in nature; they simply do not have an associated wavelength.
What exactly are you trying to prove? The gradient between red and blue (the magentas) contains the only fully saturated colors we can perceive that aren't part of the electromagnetic spectrum. That's fantastic. Do you want to waste your life arguing about nothing instead of enjoying the miracles of nature?
The patent on the lockout mechanism has expired and clean software implementations of the algorithm have been created. So the old legal protections no longer apply.
And while Nintendo still aggressively enforces their copyright on their old games, they probably don't care very much about unlicensed games being created for their very old hardware. It's just not commercially relevant to them.
> But this means his pro and con opinions don't match typical opinions and this makes him polarizing. And hence some people will flag his articles reflexively or post reflexive dismissals. And Hacker News is heavily weighted to downrank polarizing articles.
I suspect this is it. A subset of users flag and/or downvote daringfireball on sight if it reaches the front page, and the HN algorithm treats that as a strong signal.
I've seen the same. I don't think it's the reboot. My understanding is that NAND undergoes wear-leveling even when it is read-only: the card shuffles data around its storage even when it hasn't been written to, and the firmware doing that shuffling is unreliable.
> Why mount the SD card read/write. Why not mount read-only.
I have seen dozens of name-brand SD cards fail while mounted read-only in a medium-sized deployment of Pis. The problems mostly come from firmware bugs, not from NAND wear.