
> I wonder if C++ has some hairy concepts and syntax today on par with Rust's more difficult parts.

Unqualified name lookup has been challenging in C++ since even before C++11. Overload resolution rules are so painful that it took me weeks to review a patch simply because I had to back out of trying to make sense of the rules in the standard. There are several slightly different definitions of initialization. If you really want to get in the weeds, start playing around with std::launder and std::byte and strict aliasing rules and lifetime rules, and you'll yearn for the simplicity of Rust.

C++ is by far the most complex of the languages whose specifications I have read, and that's before we get into the categories of things that the standard just gives up on.


> start playing around with std::launder and std::byte and strict aliasing rules and lifetime rules, and you'll yearn for the simplicity of Rust

Annotations like std::launder, lifetime manipulation, etc. solve a class of problems that exist in every systems language. They inform the compiler of properties that cannot be known by analyzing the code. Rust isn't special in this regard; it has the same issues.

Without these features, we either relied on unofficial compiler-specific behavior or used unnecessarily conservative code that was safe but slower.
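To make that concrete, here's a minimal sketch of the classic (pre-C++20) situation std::launder exists for; the Widget type and values are made up for illustration:

    #include <new>

    struct Widget { const int id; };

    int reuse(Widget* w) {
        // The compiler may assume a const member never changes, so after
        // reusing the storage we have to "launder" the old pointer to tell
        // it a different object now lives there.
        w->~Widget();
        new (w) Widget{42};
        return std::launder(w)->id;  // plain w->id would be UB here
    }

No analysis of the code alone could tell the compiler this is intended; the annotation carries that information.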


> Rust isn't special in this regard; it has the same issues.

This is both fundamentally true and misleading. Rust has to solve the same issues but isn't obliged to make all the same bad choices to do that and so the results are much better.

For example, C++ dare not perform compile-time transmutations, so it just forbids them, and a whole bunch of extra stuff landed to work around that. In Rust they're actually fine, so you can just:

    const FOO: bool = unsafe { core::mem::transmute::<i8, bool>(2) };
That blows up at compile time because we claimed the bit pattern for the integer 2 is a valid boolean, and it isn't. If we choose 0 (or 1) instead, this works and we get the expected false (or true) boolean instead of a compiler diagnostic.

C++ could allow this but doesn't: rather than figure out all the tricky edge cases, the committee just said no, use this other new thing we made.


> For example, C++ dare not perform compile-time transmutations

I am confused by this assertion. You can abuse the hell out of transmutations in a constexpr context. The gap between what is possible at compile-time and run-time became vanishingly small a while ago.

I think your example is not illustrative in any case. Many C++ code bases work exactly like your example, enforced at compile-time. That this can be an issue is a hangover from retaining compatibility with C-style code which conflates comparison operators and cast operators. It is a choice.

C++ can enforce many type constraints beyond this at compile-time that Rust cannot, with almost zero effort in explicit type creation. No one should be passing ints around.
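One cheap way to get there is scoped enums as strong typedefs (a sketch; UserId/GroupId are made-up names):

    // Distinct types make mixing up raw ints a compile error, at zero
    // runtime cost (C++17 allows list-init of a scoped enum from its
    // underlying type):
    enum class UserId : int {};
    enum class GroupId : int {};

    void grant(UserId, GroupId) {}

    void example() {
        grant(UserId{1}, GroupId{2});    // fine
        // grant(GroupId{2}, UserId{1}); // error: no matching function
        // grant(1, 2);                  // error: no implicit conversion
    }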


First of all, mem::transmute is like bit_cast (which works perfectly fine in a constexpr context), not reinterpret_cast.

Second, this compiles just fine:

    constexpr int ivalue = 1;
    constexpr bool bvalue {ivalue};
This fails at compile time (invalid narrowing):

    constexpr int ivalue = 2;
    constexpr bool bvalue {ivalue};
Note we don't need bit_cast for this example as int to bool conversions are allowed in C++.

Surely "We have many different ways to do this, each with different rules" is exactly the point? C++ 20's std::bit_cast isn't necessarily constexpr by the way although it is for the trivial byte <-> boolean transmutation I mentioned here.

I see that C++ people were more comfortable with the "We have far too many ways to initialize things" examples of this problem but I think transmutation hits harder precisely because it sneaks up on you.


bit_cast and reinterpret_cast do different things: the first works at the value level, the second preserves address identity (and is problematic from an aliasing point of view).
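To make the contrast concrete (a sketch, assuming a 32-bit float so the sizes match):

    #include <bit>
    #include <cstdint>

    void demo(float f) {
        // Value level: copies the bits into a brand-new uint32_t object.
        std::uint32_t v = std::bit_cast<std::uint32_t>(f);

        // Address identity: p aliases the storage of f, and reading *p
        // runs afoul of strict aliasing.
        std::uint32_t* p = reinterpret_cast<std::uint32_t*>(&f);
        (void)v; (void)p;
    }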

Not sure what any of this has to do with initialization though.

FWIW, the direct translation of your Rust code is:

    constexpr char y = 2;
    constexpr bool x = std::bit_cast<bool>(y);
It fails on clang for y=2 and works for y=1, exactly like Rust.

GCC lets the UB through for y=2; I don't know if that is a GCC bug or whether the standard actually allows this form of UB to go undiagnosed at constexpr time.

What is the Rust equivalent of reinterpret_cast, and does it work at constexpr time?

edit: I guess it would be an unsafe dereference of a cast pointer. Does it propagate constants?


Firstly, that's not a direct translation, because you're making two variables and I made none at all. Rust's const is an actual constant; it's not an immutable variable. We have both, but they're different. The analogous Rust for your bit_cast example would make two immutable variables that we promise have constant values, maybe:

    static y: u8 = 2;
    static x: bool = unsafe { core::mem::transmute(y) };
Of course this also won't compile, because the representation for 2 still isn't a boolean. If it did compile you'd also (by default) get angry warnings because it's bad style to give these lowercase names.

I also don't know if you found a GCC bug, but it seems likely from your description. I can't see a way to have UB, a runtime phenomenon, at compile time in C++ as the committee imagines their language. Of course "UB? In my lexer?" is an example of how the ISO document doesn't capture the committee's intentions, but I'd be surprised if the committee would resolve a DR with "That's fine, UB at compile time is intentional".

I understand that "these are different things" followed by bafflegab is how C++ gets here, but the whole point of this sub-thread is that Rust didn't do that, so in Rust these aren't "different things". They're both transmutation; they don't emit CPU instructions, because they happen in the type system and the type system evaporates at runtime.

So this is an impedance mismatch: you've got Roman numerals and can't see why positional notation is a good idea, and I've got positional notation, so to me it's obvious. I am not going to be able to explain why this is a good idea in your notation; the brilliance vanishes in translation.


In my experience, conversions are one of the things that maximum warning levels do excellent static analysis for nowadays. In the last 15 years I've had barely a couple of problems (init vs paren initialization). All narrowing etc. is caught out of the box with warnings.
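For example (a sketch; the flag names are GCC/Clang's):

    // Compiled with -Wall -Wextra -Wconversion:
    void warnings_demo(int big) {
        short s = big;  // warning: conversion from 'int' may change value
        bool b{2};      // error: narrowing conversion of '2' to 'bool'
        (void)s; (void)b;
    }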

I'm not sure what you're getting at, but

    const bool z = (const bool)((int8_t)2);

is perfectly valid C++.


That's a conversion, not the same thing. The naive equivalent to transmute would be:

    int8_t x = 2;
    bool y = *reinterpret_cast<bool *>(&x);
But reinterpret_cast isn't valid in a constexpr scope.

My point is, in your exact example both reinterpret_cast and C-style casts have the exact same behavior, making the example bad. If you want to showcase a deficiency of C++, it would make sense to pick something where the difference between cast types actually matters.
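That is, in this example the C-style cast just degenerates to a reinterpret_cast anyway; roughly:

    #include <cstdint>

    void casts(std::int8_t* x) {
        // For unrelated pointer types, a C-style cast performs a
        // reinterpret_cast, so these two lines are identical (and reading
        // through either pointer as bool is the same UB):
        bool* a = reinterpret_cast<bool*>(x);
        bool* b = (bool*)x;
        (void)a; (void)b;
    }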

> But reinterpret_cast isn't valid in a constexpr scope.

std::bit_cast is


Oh cool, and it behaves like memcpy, not like pointer aliasing! I'm stuck with C++14 at work so I missed that one.
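For anyone else stuck pre-C++20, the memcpy spelling of the same idea is roughly this (assuming a 32-bit float):

    #include <cstring>
    #include <cstdint>

    // Stand-in for std::bit_cast<std::uint32_t>(f): copy the object
    // representation instead of aliasing the storage.
    std::uint32_t bits_of(float f) {
        std::uint32_t out;
        std::memcpy(&out, &f, sizeof out);
        return out;
    }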

The right strategy for using C++ effectively is to set warnings to the maximum, treat them as errors, adopt the Core Guidelines or similar, and avoid past cruft.

More often than not (except when you inherit codebases, though clang-tidy has a modernize module), the cruft is either avoidable or caught by analyzers. Not all of it.

But overall, I feel that C++ is still one of the most competitive languages if you use it as I said and with a sane build system and package manager.


> Unless you're writing a compiler, you should require the author of the patch to explain why it works.

This was a patch for the compiler implementation of the changes to the standard.

Certificate revocations are not required to be reported after the expiration date, so you can no longer reliably check if a certificate has been revoked (e.g., because its underlying key was exfiltrated or because it was misissued).

Honestly, the Fourth Crusade in 1204 was more of a "real" death of the Byzantine Empire than the conquest of Constantinople in 1453. Although the largest remnant of the Byzantine Empire was able to reconquer Constantinople in 1261, the city's population never recovered (it went from ~400k in 1204 to ~50k in 1453). The 14th century saw it riven with a series of civil wars, which the Ottomans used to expand their foothold into the remnants of the Byzantine Empire. By 1453, Constantinople was unable to really defend itself without garrisons from the major European states like Hungary and Venice, and Mehmet II was able to conquer the city before those states could get their forces sent out.

I've been trying out AI over the past month (mostly because of management trying to force it down my throat), and have not found it to be terribly conducive to actually helping me on most tasks. It still evidences a lot of the failure modes I was talking about 3 years ago. And yet the entire time, it's the AI boosters who keep trying to say that any skepticism is invalid because it's totally different than how it was three months ago.

I haven't seen a lot of goalpost moving on either side; the closest I've seen is from the most hyperbolic of AI supporters, who are keeping the timeline to supposed AGI or AI superintelligence or whatnot a fairly consistent X months from now (which isn't really goalpost-moving).


And Robocop, Terminator, Captain Kirk and Darth Vader. Lo-Pan, Superman, every single Power Ranger.

And Bill S. Preston, Theodore Logan, Spock, The Rock, Doc Ock, and Hulk Hogan.


> How big a share of the desktop market do the BSDs have compared to Linux? I imagine it’s quite small, unfortunately.

Good stats are hard to come by, but the Linux : BSD ratio is probably no smaller than the Windows : Linux ratio (which is actually running relatively low these days--Linux seems to be closing in on ~3% desktop share). That puts the BSDs overall in the 0.01% range, which is really too little market share to accurately measure.


There are three main problems with trying to offer a simple answer to the question of "what is the first computer?"

The most obvious of the problems is that a computer isn't a singular technology that springs up de novo, but something that develops from antecedents over a long, messy transition period, which requires a judgement call as to when the proto-computer becomes an actual computer, a judgement call which is obviously going to be biased by the other considerations. Consider, for a more contemporary example, what you would argue is the "first smartphone" or the "first LLM." Personally, I think the ENIAC is still somewhat too proto-computer for my tastes: I'd prefer a "first" that uses binary arithmetic and has stored programs, neither of which is true of the ENIAC.

The second major issue is that it's also instructive to look at the candidates' influence on later development. Among the contenders for "first computer," it's unfortunately kinda clear that ENIAC has the most lasting influence: ENIAC's development produced the papers that directly inspired the next generation of machines. Colossus is screwed here because of the secrecy of the code-breaking effort. Meanwhile, Zuse and the Z3 suffer from being on the losing end of WW2. The ABC has a claim here, but it's not clear whether the developers of ENIAC drew influence from it.

The final major issue isn't so much an issue by itself but rather something that colors the interpretation of the first two issues: national pride. An American is far more likely to weight the influence and ingenuity of the ENIAC and similar machines and label one of them the "first computer." A UK person would instead prefer to crown Colossus or the Manchester Baby. A German would prefer the Z3.


In many ways the ENIAC was more like an FPGA than a computer. It was programmed with patch cables connecting the different computational units as well as switches, and had no CPU as such. The cables had to be physically rerouted when changing to a new program, which took weeks. My understanding is that it was eventually programmed to emulate a von Neumann machine around 1948/49. As far as I understand, this was done mainly by Jean Bartik, based on von Neumann's ideas.

If this is correct, it was not a von Neumann machine originally, but it eventually became one, and at approximately the same time as the Manchester Baby.


> Why would anyone create a new language now?

I'm writing my own programming language right now... which is for an intensely narrow use case: I'm building a testbed for comparing floating-point implementations without messy language semantics getting in the way.

There are lots of reasons to write your own programming language, especially if you don't care about it actually displacing existing languages.


> The back half of winter was characterized by blackened, salt-saturated puddles and banks. I wonder if the prevalence of EVs has made things less dirty in the winter.

The dominant cause of that is probably brake and tire particulate matter, not car exhaust. And EVs make tire pollution go up (because they're heavier), while for brake pollution... I'm not sure if the weight effect there is counteracted by the decreased amount of friction brake use (as opposed to regenerative braking).


On my Polestar 2, I was surprised that in actual use, friction braking was basically zero - to the point where, when you start a trip, the brakes are used for a few seconds to make sure they're still working (and to scrub them a bit). In actual driving - without trying particularly on my part - it's just always regen.
