Assuming you're referring to Apple Silicon's memory bandwidth, that isn't necessarily because the memory is on-die. The bandwidth comes from having more channels to access memory, which gives the SoC a wider bus and higher throughput than a typical x86 system with two channels. For whatever reason, Intel/AMD have decided that two channels is all typical consumer chips support now, so that's on them.
You mentioned Strix Halo, which also has off-die memory. Strix Halo does have a real advantage from its wider memory bus (four channels for 256 bit instead of 128 bit), but Strix Point is equivalent-ish to Intel's platforms like Panther Lake or Arrow Lake in terms of memory setup.
In fact, Intel also had Lunar Lake, which had on-package memory. However, it was still limited to a 128-bit dual-channel bus, so there weren't many performance benefits; it did, however, help with power efficiency.
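To make the bus-width point concrete, here's a rough back-of-the-envelope sketch: theoretical peak bandwidth is just bus width times transfer rate. The specific MT/s figures in the comments below are illustrative assumptions, not exact specs for any particular SKU.

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Theoretical peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts / 1000

# Typical dual-channel x86 system: 128-bit bus, e.g. DDR5-6400
print(peak_bandwidth_gbs(128, 6400))   # 102.4 GB/s
# Strix Halo-style 256-bit bus, e.g. LPDDR5X-8000
print(peak_bandwidth_gbs(256, 8000))   # 256.0 GB/s
# Wide Apple-style 512-bit bus, e.g. LPDDR5-6400
print(peak_bandwidth_gbs(512, 6400))   # 409.6 GB/s
```

Doubling the channels (Strix Halo) or quadrupling them roughly scales peak bandwidth linearly, which is the whole advantage; on-package placement mostly buys power efficiency, not throughput.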
Keep in mind the Ultra 300 chips have also only recently gained kernel support. Battery life likely isn't great for now (as with previous-gen Intel chips right after release).
It makes sense to me that for now the benchmarks would be Windows specific.
Battery life on Linux is not some grand project; it's just support and setup of existing kernel drivers, typically at the distro level, which is why it's already there for some setups.
This is essentially why I'm confused. All he's doing is setting up pre-existing drivers; any chimp with access to ChatGPT can do that.
I don't see that he's bringing anything noteworthy to the table, but I've repeatedly heard people talk as though he's going to bring better battery life to Linux through omarchy.
Consider reading the article, which addresses all of the points you raise.
It's directly stated in the post that the entire test is meant to be humorous and not taken seriously, only that it has vaguely followed model performance to date. The author also writes that this new result shows that trend has broken.
This article seems overly critical, trying to impose a stance. I have never heard anyone say "因为雨下得很大，所以我决定不去了" ("Because the rain was heavy, I decided not to go").
> The Sausage Sentence: English stacks relative clauses. Modern Chinese attempts to shove that complexity into a single pre-noun modifier using de (的), creating bloated, breathless sentences that tax the memory.
This is given without any evidence. "Creating bloated, breathless sentences that tax the memory" sounds like something Claude might write. IMO, 的 is far from being as negative as the author (or AI) portrays it; arguably it's cleaner than English's multitude of possessive forms (his, her, theirs, its).
I think this is quite interesting for local AI applications. Since this technology basically scales with parameter count, an ASIC for a Qwen 0.5B or Google 0.3B model thrown onto a laptop motherboard would be very interesting.
Obviously not for any hard applications, but for significantly better autocorrect, local next word predictions, file indexing (tagging I suppose).
The efficiency of such a small model should theoretically be great!
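A quick footprint estimate suggests why such small models are plausible on laptop-class hardware. This is a hedged sketch: it counts weight storage only, ignoring activations, KV cache, and quantization overhead, and the parameter counts are just the ones mentioned above.

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes).
    Ignores activations, KV cache, and quantization overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 0.5B-parameter model at different quantization levels
print(model_size_gb(0.5, 8))   # int8:  0.5 GB
print(model_size_gb(0.5, 4))   # 4-bit: 0.25 GB
```

At 4-bit quantization a 0.5B model fits in a few hundred megabytes, small enough that dedicated silicon plus a modest local memory pool for autocorrect or next-word prediction doesn't seem far-fetched.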
Quick shoutout to Fooyin (https://github.com/fooyin/fooyin), a customizable and very performant music player. Built on QtWidgets, so it's very snappy and themeable.
From what I know it still exists further out of the city. I have a friend who picked up quite a large amount of scientific equipment there. Might be a similar store with a different name, though.