Hacker News | dbdr's comments

But that also means that if you as a user/customer can make choices based on technical merits, you'll have a significant advantage.

An advantage how? Maybe you'll have one or two more 9s of uptime than your competitors; does that actually move the needle on your business?

The biggest expense in software is maintenance. Better software means cheaper maintenance. If you actually want to have a significant cost advantage, software is the way to go. Sadly most business is about sales and marketing and has little to do with the cost or quality of items being sold.

Why wouldn't it move the needle? Less time spent, less frustration, more performance, more resources focused on the business?

It will depend on each case and what makes the marketed solution inferior. If it's overly complex, you will save development time. If it's unstable, you'll save debugging time. If it's bloated, you will save on hardware costs. Etc.

matters less than we would like it to

after all, startups/scaleups/bigtech companies that make a lot of money can run on Python for ages, or make infinite money with Perl scripts (cough, AWS)

and it matters even less in non-tech companies, because their competition is also 3 incompetent idiots stacked on top of each other in a business suit!

sure, if you are starting a new project fight for good technical fundamentals


Most customers don't really have the knowledge needed to make choices based on technical merits, and that's why the market works as it does. I'm willing to say 95% of people on HN have this knowledge and are therefore biased to assume others are the same way. It's classic XKCD 2501.

> I submitted several bug fixes and refactoring, notably using smart pointers, but they were rejected for fear of breaking something.

And that, my friends, is why you want a memory safe language with as many static guarantees as possible checked automatically by the compiler.


Language choices won't save you here. The problem is organizational paralysis. Someone sees that the platform is unstable. They demand something be done to improve stability. The next management layer above them demands they reduce the number of changes made to improve stability.

Usually this results in approvals to approve the approval to approve making the change. Everyone signed off on a tower of tax forms about the change, no way it can fail now! It failed? We need another layer of approvals before changes can be made!

Yeah I've seen that move pulled. Funnily enough by an ex-Microsoft manager.

Hence the rewrite-it-in-Rust initiative, presumably. Management were aware of this problem at some level but chose a questionable solution. I don't think rewriting everything in Rust is at all compatible with their feature timelines or severe shortages of systems programming talent.

In a rewrite you can smuggle in a quality lift

I had a memory management problem so I introduced GC/ref counting and now I have a non-deterministic memory management problem.

I was waiting for that comment :) Remember that everybody, eventually, calls into code written in C.

If 90% of the code I run is in safe rust (including the part that's new and written by me, therefore most likely to introduce bugs) and 10% is in C or unsafe rust, are you saying that has no value?

Il meglio è l'inimico del bene. Le mieux est l'ennemi du bien. Perfect is the enemy of good.


That is an unexpected interpretation. Use the best tool for the job, also factoring what you (and your org) are comfortable with.

Depends on which OS we are talking about.

I know a few where that doesn't hold, including some still being paid for in 2026.


It’s worse than that. Eventually everybody calls into code that hits hardware. That is the level at which the compiler (ironically?) can no longer make guarantees. Registers change outside the scope of the currently running program all the time. Reading a register can cause other registers on a chip to change. Random chips with access to a shared memory bus can modify the memory that the compiler deduced was static. There be dragons everywhere at the hardware layer, and no compiler can ever reason correctly about all of them, because, guess what, rev2 of the hardware could swap in a footprint-compatible chip clone with undocumented behavior. So even if you gave all your board information to the compiler, the program could only be verifiably correct for one potential state of one potential hardware rev.

Sure, but eliminating bugs isn't a binary where you either eliminate all of them or it's a useless endeavor. There's a lot of value in eliminating a lot of bugs, even if it's not all of them, and I'd argue that empirically Rust does actually make it easier to avoid quite a large number of bugs that are often found in C code in spite of what you're saying.

To be clear, I'm not saying that I think it would necessarily be a good idea to try to rewrite an existing codebase that a team apparently doesn't trust they actually understand. There are a lot of other factors that would go into deciding to do a rewrite than just "would the new language be a better choice in a vacuum", and I tend to be somewhat skeptical that rewriting something that's already widely being used will be possible in a way that doesn't end up risking breaking something for existing users. That's pretty different from "the language literally doesn't matter because you can't verify every possible bug on arbitrary hardware" though.


The hardware only understand addresses and offsets, aka pointers :)

All the more reason to have memory safety on top.

If you're sufficiently stubborn, it's certainly possible to call directly into code written in Verilog, held together with inscrutable Perl incantations.

High-level languages like C certainly have their place, but the space seems competitive these days. Who knows where the future will lead.


If you want something extra spicy, there are devices out there that implement CORBA in silicon (or at least FPGA), exposing a remote object accessible using CORBA.

You didn’t miss the smiley, did you? :)

I didn't miss the smiley =)

They could have started with simple Valgrind sessions before moving to Rust though. Massive number of agents means microservices, and microservices are suitable for profiling/testing like that.

Visual Studio has had quite a bit of similar tooling, and you can have static analysis turned on all the time.

SAL also originated with XP SP2 issues.

Just like there have been tons of tools trying to fix C's flaws.

However the big issue with opt-in tooling is exactly it being optional, and apparently Microsoft doesn't enforce it internally as much as we thought.


> However the big issue with opt-in tooling is exactly it being optional,

That's true, and that's a problem.

> and apparently Microsoft doesn't enforce it internally as much as we thought.

but this, in my eyes, is a much bigger problem. It's baffling considering what Microsoft does as their core business: operating systems, high-impact software.

> Visual Studio has had quite a bit of similar tooling, and you can have static analysis turned on all the time.

Eclipse CDT is not as capable as VS, but it is not a toy and has the same capabilities: always-on static analysis plus Valgrind integration. I used both without any reservation, and this habit paid dividends at every level of development.

I believe in learning the craft more than the tools themselves, because you can always hold something wrong. Learning the capabilities and limits of whatever you're using is a force multiplier, and considering how fierce the competition between companies is, leaving that kind of force multiplier on the table is unfathomable from my PoV.

Every tool has limits and flaws. Understanding them and being disciplined enough to check your own work is indispensable. Even if you're using something which prevents a class of footguns.


I think the core business of MSFT has always been building a platform, grabbing everyone in, and seeking rent. Bill figured this out back in 1975, and it has been super successful.

The OS was that platform, but in Azure it is just the lowest layer, so maybe management just doesn't see it, as long as the platform works and government contracts keep coming in. Then you have a bunch of yes-man engineers (I'm so surprised that any principal engineer, who should be financially free, could push out plans described by the author in this series) who give management false hopes.


One reason why Windows is a mess is that Satya sees Azure as the actual OS, "Azure OS", Windows' version of OS/360.

Ideally everyone would be using it via services hosted there, with the browser or mobile devices as thin clients.

Just two months ago,

https://blogs.windows.com/windowsexperience/2026/02/26/annou...


It’s org-dependent. On Windows, SAL and OACR are kings, plus any contraption MSR comes up with that they run on checked-in code and files bugs on you out of the blue :) Different standards.

Did you miss the part that writes about the "all new code is written in Rust" order coming from the top? It also failed miserably.

That was quite interesting, and now I will take another look at the stuff I shared previously.

However, given how the Windows team has been anti anything not C++, it is not surprising that it actually happened like that.


It came from the top of Azure, and for Azure only. Specifically, the mandate was for all new code that cannot use a GC, i.e. no more new C or C++.

I think the CTO was very public about that at RustCon and other places where he spoke.

The examples he gave were contrived, though, mostly tiny bits of old GDI code rewritten in Rust as success stories to justify his mandate. Not convincing at all.

Azure node software can be written in Rust, C, or C++; it really does not matter.

What matters is who writes it, as it should be seen as “OS-level” code requiring the same focus as actual OS code given the criticality; therefore it should probably be written by the Core OS folks themselves.


I have followed it from the outside, including talks at Rust Nation.

However the reality you described on the ground is quite different from e.g. Rust Nation UK 2025 talks, or those being done by Victor Ciura.

It seems more in line with the rejections that took place against previous efforts regarding Singularity, Midori, Phoenix compiler toolchain, Longhorn,.... only to be redone with WinRT and COM, in C++ naturally.


May I ask, what kind of training do new joiners of the kernel team (or any team that effectively writes kernel-level code) get? Especially if they haven't written kernel code professionally -- or do they ONLY hire people who have written a non-trivial amount of kernel code?

> Do you prefer that we not even try to spread beyond our one planet, when an entire galaxy, or maybe even the neighboring ones, might be in reach if we try? What if someday at the very end of the lifetime of our sun and similar stars, we look back, and regret not trying?

The timeline you are speaking of is in billions of years. Yes, on that timescale, it definitely makes sense to try.

This very century, there are very serious scientific concerns about the continued comfortable habitability of Earth and the ensuing geopolitical instability caused by the accumulation of greenhouse gases in the atmosphere, the sixth mass extinction event, etc. We have solutions to mitigate those to some degree, but this requires very significant resource allocation to that goal, and so far it seems possible that we would fall short.

I don't think there should be zero resources allocated to space exploration, but it's at least reasonable to question whether we have our priorities set right.


For one thing, stablecoin issuers hold more than $100B of US treasury bills, on the same level as some major countries. For better or worse, the old and new systems are interconnected now.

https://www.brookings.edu/articles/the-rise-of-stablecoins-a...


$100B sounds like a lot of money to any sane human being, but for the Treasury market it's really a drop in the ocean. The current market cap[1] is $29 trillion, give or take a little, so $100B is about 35bps of the total. It would nudge the market a little bit, but not that much.

[1] Here's my source and they should of course know https://fred.stlouisfed.org/series/MVMTD027MNFRBDAL
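As a quick sanity check of the arithmetic, using the figures from the comment above:

```rust
fn main() {
    let stablecoin_holdings: f64 = 100e9; // ~$100B held by stablecoin issuers
    let treasury_market: f64 = 29e12;     // ~$29T total market value
    // Basis points: fraction of the total, times 10,000.
    let bps = stablecoin_holdings / treasury_market * 10_000.0;
    println!("{bps:.1} bps"); // ~34.5 bps
}
```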


What "endless crap that is just not that useful" has been added to Rust in your opinion?

returning "impl Trait"; async/await with Pin/Unpin/Waker; catch_unwind; procedural macros; "auto impl trait for type that implements other trait".

I understand some of these kinds of features are because Rust is Rust but it still feels useless to learn.

I haven't been following Rust development for about 2 years, so I don't know what the newest things are.


RPIT (Return Position impl Trait) is Rust's spelling of existential types. That is, the compiler knows what we return (it has certain properties) but we didn't name it (we won't tell you what exactly it is). This can be for two reasons:

1. We didn't want to give the thing we're returning a name. It does have one, but we want that to be an implementation detail. In comparison, the Rust stdlib's iterator functions all return specific named iterators, e.g. the split method on strings returns a type actually named Split, with a remainder() function so you can stop and just get "everything else" back. That's an exhausting maintenance burden; if your library has some internal data structures whose types aren't really important or are unstable, RPIT allows you to duck out of all the extra documentation work and just say "it's an Iterator".

2. We literally cannot name this type, there's no agreed spelling for it. For example if you return a lambda its type does not have a name (in Rust or in C++) but this is a perfectly reasonable thing to want to do, just impossible without RPIT.
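A minimal sketch of both cases (the function names here are made up for illustration): a closure whose type cannot be written down, and an iterator chain whose concrete type we'd rather keep as an implementation detail.

```rust
// Case 2: a closure's type has no name, so RPIT is the only way
// to return it directly without boxing.
fn adder(n: i32) -> impl Fn(i32) -> i32 {
    move |x| x + n
}

// Case 1: the concrete type is Filter<RangeInclusive<u32>, {closure}>,
// a mouthful we'd rather hide behind "some Iterator".
fn evens_up_to(limit: u32) -> impl Iterator<Item = u32> {
    (0..=limit).filter(|x| x % 2 == 0)
}

fn main() {
    let add5 = adder(5);
    println!("{}", add5(2)); // 7
    let evens: Vec<u32> = evens_up_to(6).collect();
    println!("{:?}", evens); // [0, 2, 4, 6]
}
```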

Blanket trait implementations ("auto impl trait for type that implements other trait") are an important convenience for conversions. If somebody wrote a From implementation then you get the analogous Into, TryFrom and even TryInto all provided because of this feature. You could write them, but it'd be tedious and error prone, so the machine does it for you.
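For instance, with hypothetical Celsius/Fahrenheit types, writing just the From impl is enough; the stdlib's blanket `impl<T, U: From<T>> Into<U> for T` supplies the Into direction automatically:

```rust
struct Celsius(f64);
struct Fahrenheit(f64);

// We only write From...
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // ...and the stdlib's blanket implementation hands us Into for free.
    let f: Fahrenheit = Celsius(100.0).into();
    println!("{}", f.0); // 212
}
```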


Like you said, it is possible to not use this feature, and avoiding it arguably creates better code.

It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to return a trait object.

A lambda is essentially a struct with a method, so it is the same.

I understand the point about auto trait impls and agree, but it is still annoying to me.


> It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object.

IMO it is a hack to use dynamic dispatch (a runtime behaviour with honestly quite limited use cases, like plugin functionality) to get existential types (a type system feature). If you are okay with parametric polymorphism/generics (universal types) you should also be okay with RPIT (existential types), which is the same semantic feature with a different syntax, e.g. you can get the same effect by CPS-encoding except that the syntax makes it untenable.

Because dynamic dispatch is a runtime behaviour it inherits a bunch of limitations that aren't inherent to existential types, a.k.a. Rust's ‘`dyn` safety’ requirements. For example, you can't have (abstract) associated types or functions associated with the type that don't take a magic ‘receiver’ pointer that can be used to look up the vtable.
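A small sketch of the distinction (the trait and types are made up for illustration): the same trait can be consumed via static dispatch (generics/impl Trait) or dynamic dispatch (dyn), but adding a receiver-less associated function would rule out the dyn form while leaving the generic form untouched.

```rust
trait Shape {
    fn area(&self) -> f64;
    // Uncommenting a receiver-less associated function like
    //     fn unit() -> Self;
    // would make the trait no longer usable as `dyn Shape`: with no
    // receiver pointer there is nothing to look the vtable up through.
}

struct Square(f64);
impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// Static dispatch: monomorphized per concrete type, no vtable.
fn area_static(s: &impl Shape) -> f64 { s.area() }

// Dynamic dispatch: one compiled function, calls go through a vtable.
fn area_dyn(s: &dyn Shape) -> f64 { s.area() }

fn main() {
    let sq = Square(3.0);
    println!("{} {}", area_static(&sq), area_dyn(&sq)); // 9 9
}
```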


It takes less time to compile, and that is a huge upside for me personally. I am also not OK with parametric polymorphism, except for containers like HashMap.

Returning impl trait is useful when you can't name the type you're trying to return (e.g. a closure), types which are annoyingly long (e.g. a long iterator chain), and avoids the heap overhead of returning a `Box<dyn Trait>`.

Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"


> Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

curious if you have benchmarks of "catastrophically slow".

Also, on Linux, the mainstream implementation translates async calls into blocking logic with a thread pool at the kernel level anyway.


Impl trait is just an enabler to create bad code that explodes compile times, imo. I've never seen a piece of code that really needs it.

I exclusively wrote Rust for many years, so I do understand most of the features fairly deeply. But I don't think it is worth it in hindsight.


Why would you worry about it if you can afford to replace it?

If you say you worry about the cost, shouldn't you worry even more about the higher expected cost of the insurance? Sure, for one item the variance is higher if you are uninsured, but if you have several such items, the variance goes down, and you save all the more money.
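A toy model of that variance argument, with made-up numbers (10% yearly breakage, $500 repair, $120 premium): the expected saving from self-insuring grows linearly with the number of items, while the relative spread of the average cost shrinks like 1/sqrt(n).

```rust
fn main() {
    // Hypothetical numbers: 10% chance per year of a $500 repair,
    // versus a $120/year insurance premium.
    let p_break = 0.10_f64;
    let repair = 500.0_f64;
    let premium = 120.0_f64;

    // Expected yearly repair cost per uninsured item: $50, well under $120.
    let expected = p_break * repair;
    // Standard deviation of one item's yearly cost: $500 * sqrt(p(1-p)) = $150.
    let sd_one = repair * (p_break * (1.0 - p_break)).sqrt();

    // With n independent items, the expected saving grows with n while the
    // relative spread of the average cost shrinks like 1/sqrt(n).
    for n in [1u32, 4, 16] {
        let nf = f64::from(n);
        let saving = nf * (premium - expected);
        let rel_spread = sd_one / nf.sqrt() / expected;
        println!("n={n}: expected saving ${saving:.0}/yr, relative spread {rel_spread:.2}");
    }
}
```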


Because even though I can afford to buy a new phone or repair mine if I break it, it still _feels_ terrible to have to spend 500+ bucks because I was a dumbass.

I literally toss my phone onto my couch or my bed from across the room dozens of times a week without worrying about misjudging the throw (which happens more than I'd like to admit), toss it on the ground at the gym, have no problem taking long baths with it, wash it under the sink if it gets dirty, and do dozens of things I would not do if I had to pay full price whenever I actually broke it.

Having AC+ lets me treat the device with a level of carelessness that is worth the price to me.

Math-wise with how durable recent flagship devices are, you are probably correct that I’d be better off financially to just accept that I will break a phone every couple of years and just eat the cost.

But psychologically, I’m happier paying ~120bucks a year, than $500 in repair fees once in a while.


Yes, the argument is that the entity providing the insurance is surely earning more in premiums than they are paying out, since in addition to payouts they also have overhead costs and must be profitable. Said another way, their customers are paying more than they receive, on average. That's a mathematical and economic certainty.

You are right that it might still feel better to you to pay regularly instead. That's subjective.

Knowing that you will likely end up paying less in the long term if you don't pay the insurance might help getting over that feeling, but that's a personal choice in the end.


It's bordering on insurance fraud, and I usually trade in my devices back to Apple so I don't bother with it; but there's probably at least one case where both you and Apple come out ahead financially.

AC+ includes what they call "Express Replacement Service", where they will send you an entirely new device as part of your claim, and they'll reuse your old one for parts.

If you _just happen_ to accidentally fall with your phone in hand right after the new ones come out, the delta in price between "a scuffed up, used 1-year old phone" and "brand new refurbished device from Apple" is higher than the price of the insurance and incidental damage fees.


Building a tech and falsely advertising it as something other than what it is (e.g. self-driving instead of driving assistance) can typically be done by different people. Lacking specific evidence, it's reckless to accuse this person.


right. i'm mostly ignorant of the subject and rushing to judgement based on bias. but he did lead the computer vision team at tesla that created autopilot for years. he didn't resign in protest and to my knowledge hasn't apologized, but again i'm ignorant and not seeking new data.


> It was a challenge to write routines that would keep the computer tolerably in tune, since the Mark II could only approximate the true pitch of many notes: for instance the true pitch of G3 is 196 Hertz but the closest frequency that the Mark II could generate was well off the note at 198.41 Hertz.

There are several notes that sound significantly out of tune, a bit like a beginner violinist. Which is kind of poetic in a way. The first computer to play music (in 1951!) had not mastered it yet.
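For scale, the deviation quoted in the article works out to about 21 cents sharp (a cent being 1/100 of a semitone, 1200 per octave), comfortably within audible range:

```rust
fn main() {
    // G3 should be 196 Hz; the Mark II could only manage 198.41 Hz.
    // Pitch perception is logarithmic: 1200 cents per octave.
    let true_g3: f64 = 196.0;
    let mark_ii: f64 = 198.41;
    let cents = 1200.0 * (mark_ii / true_g3).log2();
    println!("{cents:.0} cents sharp"); // ~21 cents
}
```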


It’s truly fascinating that it was out of tune because of the similarity of the Mark II's timing to sound itself .. but also that computing rapidly started operating in a much higher frequency band and is capable these days of bending audio realities in other astonishing ways ..


Thanks, I did not know! On Firefox/Linux, it's Alt and dragging the mouse through the part of the text you want.


If it "only" speeds up DOM access, that's massive in itself. DOM is obviously a crucial element when running inside a browser.

