
I wonder why? Some possibilities:

1. Extra silicon area being used by NPUs and TPUs instead of extra performance?

2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?

3. Fixes for speculative execution vulnerabilities?

4. Most computers are now "fast enough" for the average user, no need to buy top end.

5. Intel's new CPUs are slower than the old ones?



> 1. Extra silicon area being used by NPUs and TPUs instead of extra performance?

I'm not an expert in silicon design, but I recall reading that there's a limit to how much power can be concentrated in a single package due to power density constraints, and that's why they are adding fluff instead of more cores.

> 2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?

This is a huge problem as well. I have a 2017 15" MacBook Pro and it's a decent computer, except that it's horribly sluggish at doing anything, even opening Finder. That is with a fresh install of macOS 13.7.1. If I install 10.13, it is snappy. If I Boot Camp into Windows 11, it's actually quite snappy as well, and I somehow get 2 more hours of battery life, despite Windows under Boot Camp not being able to power down the dGPU the way macOS can. Unfortunately I hate Windows, and Linux is very dodgy on this Mac.


It's WILD that Windows 11 runs faster than the machine's native OS...

Could this suggest that modern Windows is, dare I say it, MORE resource efficient than modern macOS?! That feels like a ludicrous statement to type, but the results seem to suggest as much.


My first thought is that Apple is throttling performance of older machines on purpose (again). As they did with the phones.

Would explain why Windows runs faster.


> My first thought is that Apple is throttling performance of older machines on purpose (again).

The battery life argument was relevant, but where one sits on the issue is going to depend on the poison one picked.

https://en.m.wikipedia.org/wiki/Batterygate


Modern macOS might be optimized for Apple Silicon CPUs. But even when it was Intel only, there were probably times when Windows was lighter, albeit bad in other ways.


It’s a microkernel OS, it’s going to be slower by its very nature. And on ARM it wastes memory due to its 16KiB pages.


Neither of them use microkernels. They are monolithic kernels with loadable module support ("hybrid" kernels). The cores of both Windows NT and XNU were originally microkernels, but then they put all of the servers into the kernel address space to make them into the monolithic kernels that they are today.


XNU apparently stands for "X is not Unix." Well, that's confusing.


It actually was a recursive acronym that stood for "XNU is Not Unix". Then at some point Apple got it UNIX certified as part of macOS.


> It’s a microkernel OS, it’s going to be slower by its very nature.

Yawn. Ever heard of https://en.wikipedia.org/wiki/L4_microkernel_family ?

Or https://en.wikipedia.org/wiki/Genode ?

(If you have no idea what you're talking about...)


You're comparing a modern microkernel to something originating from the 80s?

Keep your rude attitude to yourself.


It's from 1993. Also, be more precise about what exactly you mean by "It’s a microkernel OS, it’s going to be slower by its very nature."

Otherwise don't be surprised when people take offense at regurgitated and long-since-disproven nonsense.

kthxbye


Why is this convo about kernels so angry? And macOS isn't even based on a microkernel.


i forgot ;)


The pessimistic viewpoint is that the hardware vendor would not mind if you felt your machine was slow and were motivated to upgrade to the latest model every year. The fact that they also control the OS means they have means, motive, and opportunity to slow older models if their shareholders demanded it.


Or they simply keep supporting old models with new versions of the OS, even though newer software versions are optimized for, and contain new features enabled by, newer hardware.

If your machines have more RAM, you can use a more RAM-intensive solution to speed many things up, deliver a more computationally intensive but higher-quality solution, or simply add a previously challenging new feature.

What would be interesting is how fast old but highly spec'd models slow down: which slowdowns come from optimizing for newer architectures, versus optimizing with the expectation of more resources?


The problem seems to be CPU bound; it's got 16GB of memory and memory pressure is low. The CPU is an i7-7820HQ, though I thought it was interesting that my iPhone XR (Apple A12) scores higher on a synthetic benchmark than my top-of-the-line MacBook Pro from the same era.


I'm just as surprised. Also, I was using Windows 11 LTSC 2024, not the standard version, which could impact the validity of my comparison.


This doesn't feel terribly surprising to me. macOS has always had impressively performant parts, but its upgrades generally lower responsiveness. On modern hardware it's less perceptible, obviously, and they want to sell machines, not software. But the last release that felt like it prioritized performance and snappiness was Snow Leopard, more than fifteen years ago now.

I will say the problem was a lot worse before the core-count explosion. It was easy for a single process to drag the entire system down. These days the computer is a lot better at adapting to variable loads.

I love my Macs, and I'm about 10x as productive on them as on a Windows machine after decades of daily usage (and probably a good 2x compared to Linux). Performance, however, is not a good reason to use macOS; it's the fact that the keybindings make sense and are coherent across the entire OS. You can use readline (Emacs) bindings in any text field across the OS. And generally speaking, problems have a single solution that's relatively easy to google for. Badly behaved apps aside (looking at you, Zoom and Adobe), administering a Mac is straightforward to reason about, and for the most part it's easy to ignore the App Store, download apps, and run them by double clicking.

I love Linux, but I will absolutely pay to not have to deal with X11 or Wayland for my day-to-day work. I also expect my employer to pay for this or manage my machine for me. I tried Linux full-time for a year on a ThinkPad and never want to go back. The only time that worked for me was at Google, when someone else broadly managed my OS. Macs are the only Unix I've ever used that felt designed around making my life easier and letting me focus on the work I want to do. Linux has made great strides in the last two decades, but the two major changes, systemd and Wayland, both indicate that the community is solving different problems than the ones that would get me to use it as a desktop. Which is fine; I prefer the Mac style to the IBM PC style they're successfully replacing. KDE is very nice and usable, but it models the computer, documents, and apps in a completely different way than I am used to or want to use.


I love Linux and run it on my servers, but on the desktop, it requires too much tinkering—I often spend more time troubleshooting than working. Laptops are especially problematic, with issues like Wi-Fi, sleep, and battery life. Windows is a mess—I hate the ads, bloat, and lack of a Unix-like environment. WSL2 works but feels like a hack. macOS, on the other hand, gives me full compatibility with Unix tools while also running Office and Adobe apps. Also, Command+C for copy and Command+V for paste is much nicer than Control+Shift+C and Control+Shift+V. macOS does have absolutely terrible screen snapping though in comparison to Windows 11. The hardware is solid—great screen, trackpad, speakers, and battery life. I considered getting a high-end PC laptop (based on Rtings' recommendations), but every option had compromises: either a terrible screen, a terrible processor, or terrible battery life. By the time I configured it to a non-crappy config, it would've cost more than a MacBook Pro 16.


> Laptops are especially problematic, with issues like Wi-Fi, sleep, and battery life.

100% true, but I love Linux as a daily driver for development. It is the same OS and architecture as the servers I am deploying to! I have had to carefully select hardware to ensure things like WiFi work and that the screen resolution does not require fractional scaling. Macs are definitely superior hardware, but I enjoy being able to perf-test the app on its native OS and skip things like running VMs for Docker.


Same! That's one less OS whose workings I have to remember!

I'm not sure I understand the whole tinkering thing. Whenever I tinker with my Linux, it's because I decided to try something tinkery, and usually mostly because I wanted to tinker.

Like trying out that fancy new tiling Wayland WM I heard about last week...


I feel like the main times Linux requires tinkering are:

1. You're trying to run it on hardware that isn't well-supported. This is a bummer, but you can't just expect any random machine (especially a laptop) to run Linux well. If you're buying a new computer and expect to run Linux on it, do your research beforehand and make sure all the hardware in it is supported.

2. You've picked a distro that isn't boring. I run Debian testing, and then Debian stable for the first six months or so after testing is promoted to stable. (Testing is pretty stable on its own, but then once the current testing release turns into the current stable release, testing gets flooded with lots of package updates that might not be so stable, so I wait.) For people who want something even more boring, they can stick with Debian stable. If you really need a brand-new version of something that stable doesn't have, it might be in backports, or available as a Snap or Flatpak (I'm not a fan of either of these, but they're options).

3. You use a desktop environment that isn't boring. I'm super super biased here[0], but Xfce is boring and doesn't change all that much. I've been using it for more than 20 years and it still looks and behaves today very much as it did when I first started using it.

If you use well-supported hardware, and run a distro and desktop environment that's boring, you will generally have very few issues with Linux and rarely (if ever) have to tinker with it.

[0] Full disclosure: I maintain a couple Xfce core components.


First, thanks for Xfce. I'm a (tiny) donor.

Two kinds of linux tinkering often get aliased and cause confusion in conversations.

The first kind is the enthusiast changing their init system, bootloader, package manager, "neofetch/fastfetch", WM, and so on... every few weeks.

The second kind is the guy who uses Xfce with a HiDPI display, who has to google and try various combinations of xrandr (augmented with the xfwm zoom feature), GDK_SCALE, QT_SCALE_FACTOR, a theme that supports HiDPI in the titlebar, a few icons in the status tray not scaling up (wpa_gui); do all that and find out that apps that draw directly with OpenGL don't respect these settings; deal with multiple monitors; plug in HDMI and then have unplugging mess up the audio profile and randomly mute Chromium browsers; decide whether to use xf86-* or modesetting or whatever the fix is to get rid of screen tearing. Bluetooth/wifi. On my laptop, for example, I had to disable USB autosuspend lest the right-hand-side USB-A port stop working.

If our threshold for qualifying hardware as well-supported is "not even a little tinkering required", then we are left with vanishingly few laptops. For the vast, vast majority of laptops, at least the things I mentioned above are required. All in all, it amounted to a couple of kernel parameters, a pipewire config file to stop random static noise in the Bluetooth sink, and then a few Xfce settings menu tweaks (WM window scaling and display scaling). So not that dramatic, but it is still annoying to deal with.

The second kind of tinkering is annoying, and is required regardless of distro/DE/WM choice, since it's a function of the layers below the DE/WM, mostly the kernel itself.


I think that for your example of HiDPI problems you might have had a desktop environment other than Xfce in mind.

I have been using Xfce for more than 10 years, almost exclusively with multiple HiDPI monitors.

After a Linux installation I have never had to do anything beyond going to Xfce/Settings/Appearance and setting a suitably high value for "Custom DPI Setting".

Besides that essential setting (which scales the fonts of all applications, except for a few Java programs written by morons), it may be desirable to set a more appropriate value in "Desktop/Icons/Icon size". Also in "Panel preferences" you may want to change the size of the taskbar and the size of its icons.

You may have to also choose a suitable desktop theme, but that is a thing that you may want to do after any installation, regardless of having HiDPI monitors.


I have a 2016 MBP-15 (sticky keyboard). I suspect Apple changed something so the fans no longer go into turbo vortex mode. Normally it isn't sluggish at all, but when it overheats, everything grinds to a halt.[1] (Presumably this is to keep the defective keyboard from melting again.[0]) Perhaps old OS/bootcamp still has the original fan profiles.

[0] Apple had an unpublicized extended warranty on these, and rebuilt the entire thing twice.

[1] kernel_task suddenly goes to 400%, Hot.app reports 24%. Very little leeway between low energy and almost dead.


I think it's that the usual measure of modern CPU performance, multithreaded performance, is worthless and has been worthless forever.

Most software engineers don't care to write multithreaded programs, as evidenced by the two most popular languages, JS and Python, having very little support for it.

And it's no wonder: even when engineers do know how to write such code, most real-world problems outside of benchmarks don't really lend themselves to multithreading, and because the parallelizable part is limited (Amdahl's law), real-world gains are limited too.
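For a rough sense of the ceiling, here's a minimal sketch of Amdahl's law (the parallel fractions and core counts are made-up illustrations, not measurements):

    package main

    import "fmt"

    // Amdahl's law: if a fraction p of the work is parallelizable and
    // runs on n cores, the best possible speedup is 1 / ((1-p) + p/n).
    func speedup(p, n float64) float64 {
        return 1 / ((1 - p) + p/n)
    }

    func main() {
        for _, n := range []float64{2, 4, 8, 16} {
            fmt.Printf("p=0.50, %2.0f cores: %.2fx\n", n, speedup(0.5, n))
        }
        // Even at 90% parallelizable, 16 cores top out around 6.4x.
        fmt.Printf("p=0.90, 16 cores: %.2fx\n", speedup(0.9, 16))
    }

With only half the work parallelizable, 16 cores buy you less than a 2x speedup, which is why extra cores often sit idle.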

The only performance that actually matters is single thread performance. I think users realized this and with manufacturing technology getting more expensive, companies are no longer keen on selling 16 core machines (of which the end user will likely never use more than 2-3 cores) just so they can win benchmark bragging rights.


> The only performance that actually matters is single thread performance. I think users realized this and with manufacturing technology getting more expensive, companies are no longer keen on selling 16 core machines (of which the end user will likely never use more than 2-3 cores) just so they can win benchmark bragging rights.

How can you state something like this in all seriousness? One of the most-used software applications has to be the browser, and right now Firefox runs 107 threads on my machine with 3 tabs open. gnome-shell runs 22 threads, and all I'm doing is reading HN. It's 2025 and multicore matters.
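A quick way to check what multicore buys is to time the same CPU-bound loop with one worker and then with one worker per core (a minimal sketch; the harmonic-sum workload is arbitrary):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    // burn does a fixed amount of CPU-bound work.
    func burn(n int) {
        s := 0.0
        for i := 1; i <= n; i++ {
            s += 1.0 / float64(i)
        }
        _ = s
    }

    // timeIt splits `total` iterations across `workers` goroutines.
    func timeIt(workers, total int) time.Duration {
        start := time.Now()
        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                burn(total / workers)
            }()
        }
        wg.Wait()
        return time.Since(start)
    }

    func main() {
        const total = 400_000_000
        fmt.Println("1 worker:", timeIt(1, total))
        fmt.Printf("%d workers: %v\n", runtime.NumCPU(), timeIt(runtime.NumCPU(), total))
    }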


Those threads don't necessarily exist for performance reasons - there can be many reasons to start a thread, from processing UI events to IO completion. I very much doubt Firefox has an easy time saturating your CPU with work outside of benchmarks.


If Firefox smears its CPU usage over multiple threads, that leaves more single-threaded performance on the table for other apps that may need it. So there could still be an effect on overall system performance.


Well yeah - what CPU-bound task do you need to be performant? Beyond many tabs - which are embarrassingly parallel - it's all either GPU, network or memory bound.

Firefox failing to saturate your CPU is a win-state.


> multithreaded performance is worthless and has been worthless forever.

I have the very opposite opinion; single-threaded performance only matters up to the point where any given task isn't unusably slow. Multithreaded performance is crucial for keeping the system from grinding to a halt, because users always have multiple applications open at the same time. Five browser windows with 4-12 tabs each, across three different browsers, 2-4 Word instances, and some Electron(-equivalent) comms app is much less unusual than I'd like it to be. I have used laptops with only two cores, and it gave a new meaning to "slow" when you tried doing absolutely anything other than waiting for your one application to finish. Only having one application open at a time was somewhat usable.


> The only performance that actually matters is single thread performance.

Strong disagree, particularly on laptops.

Having some firefox thread compile/interpret/run some javascript 500 microseconds faster is not going to change my life very much.

Having four extra cores definitely will: it means i can keep more stuff open at the same time.

The pain is real, particularly on laptops: i've been searching for laptops with a high-end many-core cpu but without a dedicated gpu for years, and still haven't found anything decent.

I do run many virtual machines, containers, databases and stuff. The most "graphic-intensive" thing i run is the browser. Otherwise i spend most of my time in terminals and emacs.


The fact that multithreaded performance is worthless for you does not prove that this is true for most computer users.

During the last 20 years, i.e. during the time interval when my computers have been multi-core, their multithreaded performance has been much more important for professional uses than the single-threaded performance.

The performance with few active threads mainly matters to gamers, and to professional users forced by incompetent managers to use expensive proprietary applications licensed to run only on a small number of cores (managers who simultaneously avoid cheaper alternatives while being unwilling to pay for a license covering more CPU cores).

A decent single-threaded performance is necessary, because otherwise opening Web pages bloated by JS can feel too slow.

However, if the single-threaded performance varies by +/- 50% I do not care much. For most things where single-threaded performance matters, any reasonably recent CPU is able to do instantaneously what I am interested in.

On the other hand, where the execution time is determined strictly by the multithreaded performance, i.e. at compiling software projects or at running various engineering EDA/CAD applications, every percent of extra multithreaded performance may shorten the time until the results are ready, saving from minutes to hours or even days.


Lots of problems can be nicely parallelized, but the cost of doing so usually isn't worth it, simply because the entity writing the software isn't the entity running it, so the software vendor can just say "get a better PC, I don't care". There was a period when having high requirements was a badge of honor for video games. When a company needs to pay for the computational power, suddenly all the code becomes multithreaded.


Yes, but look at the chart in the article. Both multi-threaded and single-threaded performance are getting slower on laptops. With desktops, multi-threaded is getting slower while single-threaded is staying the same.


SMT (hyperthreading) is largely pointless, yes, given that memory access between threads is almost always a pain, so two of them rarely process the same data and can't take advantage of a single core's cache at the same time. It doesn't even make much sense in concept.

But multicore performance does matter significantly unless you're on a microcontroller running only one process on your entire machine. Just Chrome launches a quarter million processes by itself.


Please open your task manager and see how much stuff is running.

The times where you saw one core pegged and the rest idle are a decade ago.


Programmers failing to manufacture sufficient inefficiency to force upgrade.


Can't force an upgrade with money you don't have. For a non-enthusiast, even the lowest specs are more than good enough for light media consumption, and the economy affects what they invest in.


As an experiment, I've tried working with a Dell Wyse 5070 with the memory maxed out. Even for development work, outside of some egregious compile times for large projects, it actually worked ok with Debian and XFCE for everything including some video conferencing. Even if you had money for an upgrade, it's not clear it's really necessary outside of a few niche domains. I still use my maxed out Dell Precision T1700 daily and haven't really found a reason to upgrade.


I love my Wyse 5070 boxes. I keep telling people to buy them instead of Pis.


As an enthusiast, I’ve found mid-tier hardware from over a decade ago can run the majority of games on medium/high without much problem. And that the majority of people in my life only need a laptop that can consistently stream 1080p and run a modern browser, maybe with some extra bits like Microsoft Office (although most younger users use G Suite, presumably because schooling preferred it growing up).


I very much doubt that is the cause.


I think it's reason #4. I bought a PC 12 years ago, the only upgrade I did was buying an SSD, and now it's being used by my father, who's perfectly happy with it, because it's fast enough to check mail and run MS Office. My current gaming PC was bought 4 years ago, and it's still going strong, I can play most games on high settings in 4k (thank you DLSS). I've noticed the same pattern with smartphones - my first smartphone turned into obsolete garbage within a year, my current smartphone is el cheapo bought 4 years ago, and it shows no signs of needing to be replaced.

ok I lied, my phone is only two years old, but what happened is that two years ago my phone experienced a sudden drop test on concrete followed by pressure test of a car tyre, and it was easier to buy a new one with nearly identical specs than to repair the old one.


There's 4, which has a couple of different components:

* battery life is a lot higher than it used to be and a lot of devices have prioritized efficiency

* we haven't seen a ton of changes to the high-end for CPU. I struggle to think of what the killer application requiring more is right now; every consumer advancement is more GPU focused


COVID. People start working from home or otherwise spending more time on computers instead of going outside, so they buy higher-end computers. Lockdowns end, people start picking the lower-end models again. On multi-threaded tasks, the difference between more and fewer cores is larger than the incremental improvements in per-thread performance over a couple years, so replacing older high core count CPUs with newer lower core count CPUs slightly lowers the average.


I don't buy this really. COVID to now is less than 1 laptop replacement cycle for non-techie users who will usually use a laptop until it stops functioning, and I don't think the techie users would upgrade at all if their only option is worse.


A lot of laptops stop functioning because somebody spills coffee in it or runs it over with their car. There are also many organizations that just replace computers on a 3-5 year schedule even if they still work.

And worse is multi-dimensional. If you shelled out for a high-end desktop in 2020 it could have 12 or 16 cores, but a new PC would have DDR5 instead of DDR4 and possibly more of it, a faster SSD, a CPU with faster single thread performance, and then you want that but can't justify 12+ cores this time so you get 6 or 8 and everything is better except the multi-thread performance, which is worse but only slightly.

The reluctance to replace them also works against you. The person who spilled coffee in their machine can't justify replacing it with something that nice, so they get a downgrade. Everyone else still has a decent machine and keeps what they have. Then the person who spilled their drink is the only one changing the average.


> (increased overhead with newer versions of Windows)

> Fixes for speculative execution vulnerabilities?

I don't know if they'll keep doing it, but Hardware Unboxed has been doing various tests with Windows 10 vs 11, and mitigations on vs off, as well as things like SMT on vs off for AMD V-Cache issues, or P-cores vs E-cores on newer Intels... it's interesting to see how hardware performs 6-12 months after release, because results can go all over the place for seemingly no reason.


It's either the Chromebook effect: people are just buying slower computers, possibly in poorer-ish countries.

Or: The average speed of new computers just isn't going up. People are just buying lower end computers because they're good enough. I think it's mostly that: the average is just trending a bit lower because lower is fine. The thirst for the high end is disappearing for most segments, because people don't need it.


4 has been true for over a decade.


Literally the only reason you need a computer less than 10 years old for day-to-day tasks is to keep up with the layers of JavaScript that are larded onto lots of websites. It's sad.

Edit: oh, and Electron apps. Sweet lord, shouldn't we have learned in the 90s with Java applets that you shouldn't use a garbage-collected language for user interfaces?


I use a GC'd language for UI every day -- Emacs -- and it runs well even on potato-tier hardware. The first GUI with all the features we'd recognize was written in a GC'd language, Smalltalk.

GCs introduce some overhead but they alone are not responsible for the bloat in Electron. JavaScript bloat is a development skill issue arising from a combination of factors, including retaining low-skill front end devs, little regard for dependency bloat, ads, and management prioritizing features over addressing performance issues.


There's also a lot of fast software written in Golang. GC has an undeserved bad reputation.


It's all nice and fast until the GC eats up all the CPU time under memory pressure (e.g. a container memory limit). GC deserves its bad rep, even if GC implementations have gotten a lot better over time.

GC also disqualifies a language from ever being used for anything but userland programs.
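The memory-pressure effect is easy to reproduce in Go, which exposes a soft heap cap via runtime/debug.SetMemoryLimit (Go 1.19+). A minimal sketch, with the 64 MiB cap standing in for a container limit:

    package main

    import (
        "fmt"
        "runtime"
        "runtime/debug"
        "time"
    )

    func main() {
        // Soft heap cap, standing in for a container memory limit.
        debug.SetMemoryLimit(64 << 20) // 64 MiB

        var keep [][]byte
        start := time.Now()
        for i := 0; i < 2000; i++ {
            keep = append(keep, make([]byte, 1<<20)) // allocate 1 MiB
            if len(keep) > 48 { // retain ~48 MiB, close to the cap
                keep[0] = nil // drop the oldest buffer so it can be freed
                keep = keep[1:]
            }
        }
        var s runtime.MemStats
        runtime.ReadMemStats(&s)
        fmt.Printf("%d GC cycles in %v\n", s.NumGC, time.Since(start))
    }

Rerun it without the SetMemoryLimit call and the GC cycle count collapses: the closer the live set sits to the cap, the more CPU the collector burns.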


Containers for Go software make no sense. You can cross-compile from anywhere to anywhere, and the binaries are self-contained.
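For instance, a trivial program cross-compiles with nothing but environment variables on the standard toolchain (the file and output names here are arbitrary):

    // hello.go: build for other platforms with the standard toolchain:
    //
    //     GOOS=linux  GOARCH=arm64 go build -o hello-linux-arm64 .
    //     GOOS=darwin GOARCH=amd64 go build -o hello-darwin-amd64 .
    //
    // Each output is a single self-contained binary.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        fmt.Println("built for", runtime.GOOS+"/"+runtime.GOARCH)
    }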


Which isn't really going to happen or be noticeable in this case. Yeah, GC is slightly slower, but it's like blaming a 10-minute mile on your shoes. An SDN or a DBMS written in a GC'd language is probably not a good idea, though.


It's not that they lack the skill. It's that they don't care.

If they do care then they probably depend on someone who doesn't.


I hate the framing that developers are the ones not caring. It's a ridiculous misrepresentation of the dynamics I've seen everywhere. Developers usually love optimizing stuff; however, they are under pressure to deliver functional changes. One could blame "the business" for not caring and not making time to ship a quality product. However, these business decisions are made because that's what wins customers. Customers might complain that the software isn't more lightweight, but clearly their stated preference doesn't match their revealed preference.


My laptop is 11 years old and handles Slack quite well, and Jira. I eventually maxed it out at 32 GB and that probably helps. It's only an i7-4xxx though.

Java in the 90s was really slow. It got much faster with the JIT compiler. JavaScript and browsers got many optimizations too. My laptop feels faster now than it was in 2014 (it has always run some version of Linux.)

The other problem with Java was the non-native widgets, which were subjectively worse than most of today's HTML/JS widgets, but that's a matter of taste.


I think even a blank Electron app is rather heavy because it's basically a whole web browser, which btw is written in C++. BUT my 2015 baseline MBP still feels fine.


I hate bloat and don't use Electron apps, but garbage collection is such a silly thing to pin it on. Tossing stuff like Lua and D in the same bin as the massive enterprise framework du jour is throwing the baby out with the bathwater.


For me the big improvement from upgrading was better responsiveness while on a video call (no small improvement in the modern world).


What is wrong with garbage collection and UI?


Latency spikes, probably. But most UIs shouldn't be churning through enough allocations to lead to noticeable GC pauses, and not all of today's GCs have major problems with pause times.

I don't think Electron's problems are that simple.
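In Go, at least, you can check the pause claim directly: the runtime keeps a ring buffer of recent stop-the-world pause times. A minimal sketch (the allocation churn is arbitrary):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        // Churn through allocations while retaining a sliding window,
        // forcing a number of GC cycles.
        var sink [][]byte
        for i := 0; i < 10000; i++ {
            sink = append(sink, make([]byte, 64<<10)) // 64 KiB
            if len(sink) > 512 {
                sink = sink[256:]
            }
        }
        var s runtime.MemStats
        runtime.ReadMemStats(&s)
        var worst time.Duration
        for _, p := range s.PauseNs { // ring buffer of recent pauses
            if d := time.Duration(p); d > worst {
                worst = d
            }
        }
        fmt.Printf("%d GC cycles, worst pause %v\n", s.NumGC, worst)
        // A 60 fps frame budget is ~16.6 ms; pauses well under that
        // won't, by themselves, produce visible UI jank.
    }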


From a non-expert point of view, it's about bringing in so much dead code: support for every OS audio subsystem in a graphics app, or the whole sandboxing machinery while you're already accessing files and running scripts, or even the table and float layout engines while everything is using flex. All those things need to be initialized and hooked up. At least Cordova (mobile) runs on the system web engine, which is already slimmed down.


I work on an Electron app, and we have a ticket open to investigate why the heck it's asking the OS for Bluetooth permissions even though we'd never use it. There are, of course, higher priority things to get to first, bugs that have larger effects. I'd love to be able to get to that one…


The renderer isn't going to pause on JS execution or GC.


I have a 2006-era Core 2 Duo desktop running Windows 7. The only things that make it unusable are today's Internet and Electron apps. Everything else (including Visual Studio 2019!) runs with performance comparable to my 2020 ThinkPad with a Core i7. I know it's anecdotal; I'm just saying, if we fixed the bloat on the web, most people could keep running their 10-15 year old PCs (using Linux if you want).


I use a 10-year-old MacBook Air and honestly I can do everything I need: statistics, light programming, browsing and photo editing. Perfect.

I'll wait for it to break before moving on; changing for the sake of changing feels like waste.


I use a 10-year-old MacBook Pro and the only things I dislike are the battery life and the heat it generates.

I simply can't afford the replacement 16" MBP right now. Hopefully it lasts another couple of years.


Surely the new MacBook Airs offer better performance than a 10-year-old MacBook Pro. The M3 is very reasonable: 16GB/512GB for $1,100, or $1,300 for the 15-inch, I think. Prices could go lower too, with the M4 MacBook Air purportedly releasing in March or April.


I have the cheapest Air with 8 gigs and it's just fine for anything I do, as long as I don't open 80 tabs. But for sure wait for the M4, as 16 gigs of RAM will be the base model. A coworker just got an M4 Mac Mini and I don't see why that machine wouldn't be enough for 90% of people.


My ten year old computer has an SD card slot and an HDMI port.

I would like to replace it with something that has an SD card slot and an HDMI port, as I use those frequently and don't want to deal with adapter solutions.


I just wish they upped the storage a bit. 512GB on an $1100 machine in 2025 just feels like a bad deal, especially if you already have a machine from 10 years ago that has the same amount.


But has it been true for the type of person who runs CPU benchmarks?


4, and 1b: extra silicon being used by GPUs. CPU performance isn't that important anymore.

I know, I know: all the software you use is slow and awful. But it's generally bad thanks to a failure to work around slow network and disk accesses. If you use spinning rust, you're a power user.

It's also a minority of video games that rely more on CPU performance than on GPU and memory (usually for shared reasons, in niche genres).


You’re forgetting the most likely possibility: it’s an artifact of data collection or a bug in the benchmarking software.


> 2. Passmark runs under Windows, which is probably using increasing overhead with newer versions?

Shouldn't be an issue, provided there isn't any background indexing occurring, i.e. you let the OS idle after install. Foreground applications do get a priority boost; I don't know if Passmark raises its own priority, though.


As someone who has been obsessing over small performance gains for his processor recently, I can say there is definitely overhead to consider. Note that by default you now get things like Teams, Copilot, the revised search-indexing system, Skype, etc. Passmark will max out all cores at once in many tests; this tends to make it sensitive to 'background' processes (even small ones).
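A crude way to see that sensitivity: run a fixed single-threaded workload repeatedly and look at the run-to-run spread, once on an idle machine and once with the background apps going. A minimal sketch (the workload and iteration counts are arbitrary):

    package main

    import (
        "fmt"
        "time"
    )

    // work is a fixed CPU-bound task whose duration we sample.
    func work() {
        s := 0.0
        for i := 1; i < 50_000_000; i++ {
            s += 1 / float64(i)
        }
        _ = s
    }

    func main() {
        var min, max time.Duration
        for i := 0; i < 10; i++ {
            start := time.Now()
            work()
            d := time.Since(start)
            if i == 0 || d < min {
                min = d
            }
            if d > max {
                max = d
            }
        }
        fmt.Printf("fastest %v, slowest %v, spread %.1f%%\n",
            min, max, float64(max-min)/float64(min)*100)
    }

The spread on a noisy system is exactly the kind of thing that moves an aggregate benchmark average without any hardware change.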


People are buying more HP Streams and fewer ASUS ROGs than in previous years?


ARM?


Maybe it's all the new Intel CPUs that failed?


It’s everyone jumping ship and switching from 14th gen to AMD


The number of people who swap CPUs or even pay attention to the "Intel inside" sticker on a PC must be very small.


I mean the CPUs were tested early on, but later they failed and no longer raise the average. It should be visible in more detailed statistics if that's the case.


People are spending less on new computers.



