
> But none of the tests actually test how fast can you learn

Some tests that actually test how fast you can learn are aptitude tests. I was given such tests by only two companies among hundreds of applications.


I need a simple OS that does things without frills.


ok


When memory safe programming language?


Here's Theo's response on integrating safe languages into openbsd, circa 2017: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2

To summarize: he mentioned a lack of base POSIX utilities in these languages (POSIX-compliant utilities); that changes like this take a long time (the stack protector took 10 years to implement); that these new toolchains would dramatically increase build times (compare Haskell cgrep vs. OpenBSD grep); and that OpenBSD requires that base can build base on all platforms, and some of these languages won't work on some supported platforms (there wasn't enough address space on i386 for Rust to compile itself, for example).

I realize some of this may be dated by now, but I assume the above are the types of concerns they would want to address.


On OpenBSD? I don't think it will happen any time soon. The developers seem pretty happy with C and instead enforce a strict code style, commit reviews, and auditing. There was a user on the lists a while back also running some static analysis and submitting bugs (moon-something, sorry, it was years ago).


Conceivably when the current team is retired and a different group of people with a different set of priorities takes over.


A software solution provided by the OS or language can make this hardware solution irrelevant.


Windows has done this in software for approximately 8 years.

An advantage of the software solution is that you don't need to have the feature compiled into every library for it to work; you just lose protection in those parts. That makes for a much quicker rollout. It also allows faster iteration: in the Windows Insider Preview you can get the extended version that also checks that the hashed function signature matches.

1: https://learn.microsoft.com/en-us/windows/win32/secbp/contro...


You've got it backwards: this hardware solution makes the software solutions irrelevant.


Nope. Here's the actual problem: in these crappy languages it's really easy for mistakes to result in a stack smash, so these types of hacks aim to make it harder for the bad guys to turn that into arbitrary remote code execution. Not impossible, just harder. Specifically in this case the idea is that they won't be able to abuse arbitrary bits of a function without calling the whole function, at the cost of some hardware changes and emitting unnecessary code. So maybe they can't find a whole function which works for them and they give up.

Using better languages makes the entire problem disappear. You don't get a stack smash, the resulting opportunities for remote code execution disappear.

It suggests that maybe the "C magically shouldn't have Undefined Behaviour" people were onto something after all. Maybe C programmers really are so wedded to this awful language that just being much slower than Python wouldn't deter them. There is still the problem that none of them can agree how this should work, but if they'll fund it maybe it's worth pursuing to find out how much they will put up with to keep writing C.


I think one could argue that all the software mitigations that aren't based on compile time proofs result in quite a bit more "emitting unnecessary code", if "unnecessary" is taken to mean "not strictly intrinsic to the task of the program". And undefined behavior is bad, but getting rid of it wouldn't be a silver bullet for this problem in C, I think. All undefined behavior could become "implementation defined" tomorrow, where the C compiler becomes more like a high-level assembler (again), and you could still jump the instruction pointer into arbitrary program text.


> All undefined behavior could become "implementation defined" tomorrow, where the C compiler becomes more like a high-level assembler (again), and you could still jump the instruction pointer into arbitrary program text.

Try to work this through in your head. Imagine how you need to specify the working of the abstract machine in order to allow this. How do we talk about an "instruction pointer" on the abstract machine? What are the instructions it's pointing to? Am I defining an entire bytecode VM?

Nah, instead you're going to do one of two things. One: "Undefined Behaviour" which we explicitly took off the table, or Two: "If this happens the program aborts". And with that the big problem evaporates. Does it make those C programmers happy? I expect not.


Implementation defined means the compiler must specify the behavior, but it has near total freedom, and it can define it specific to the target system. There is no abstract machine. If I use GCC on Linux x86-64, then there very much is an instruction pointer.


In the real world, compilers just specify that the behaviour is undefined and tell you to suck it up. But we're talking about a hypothetical where we aren't allowing Undefined Behaviour. Saying "Oh, but we can if we say it's the implementation choosing" is a get out which is meaningless for the hypothetical. Just refuse to engage with the hypothetical instead if you don't like it.


I'm using specific, standards-defined language that's relatively well known. For example, sizeof(int) is implementation defined, meaning it must have a documented definition specific to the implementation (e.g., on gcc x86_64-linux-gnu it's 4).

In languages like C that are closer to the machine, not everything has to be specified strictly in terms of a generic abstract machine.

I'm not trying to be hostile or evasive or derisive; I'm just genuinely responding to your original comment, which I think missed some important info. And my point was that if we imagine a different world from the real one we're in right now, where all undefined behavior became implementation-defined behavior, there would still be a need for mitigations like endbr64. So I'm not painting a rosy picture for C. I just think undefined behavior is a red herring. Assembly doesn't have undefined behavior, but obviously you can have all sorts of issues there.


> Assembly doesn't have undefined behavior, but obviously you can have all sorts of issues there.

The machine is in the real world and is thus obliged to have some actual behaviour, but it is not always practical to discern what that behaviour would be, let alone make it reliable across a product line and document it in an understandable way. As a result, your CPU's documentation does, in effect, include "Undefined Behaviour" too.


True, when writing my comment I wanted to qualify it to the same effect, but thought it would be an unnecessary subtlety to the general thrust of my point. That is, we can ignore this kind of "undefined behavior in the machine itself" for the purposes of this particular discussion.


I don't see how to ignore it though. If we're defining the behaviour but then our "definition" just doesn't specify the actual behaviour because it's specified in terms of hardware with no clearly defined behaviour for that situation then it's just word play, we're not really doing what I set out.


If for the purposes of this discussion we can't ignore it at the machine level (because we're assuming higher level languages, crappy or otherwise, are unlikely to generate machine instructions that exhibit undefined behavior), then why were we discussing higher level languages and their crappiness at all? I'm not saying this to be snarky, I just mean that I really think the likelihood of machine undefined behavior being an issue is on the order of likelihood for cosmic rays to flip bits -- happens, and can't be ignored (buy ECC memory), but more interesting to talk about the things that we are many orders of magnitude more likely to experience, e.g., bugs in C programs, bugs in unsafe Rust, bugs in managed language runtimes, etc. I think those things are not all equally likely, but could all benefit from endbr64 type mechanisms, including in JIT output.

To be clear, unlike the comment root, I don't think this particular hardware mechanism obviates the need/benefits of related software mechanisms. But in terms of cost/benefit/applicability, endbr64 type mechanisms look pretty good all around.


I’m always amused by how many of OpenBSD’s mitigations are patching over something as basic as lack of bounds checking, yet they’ll never add bounds checking. And, as you said, those are all just speed bumps, not fixes.


It's only irrelevant if the hardware solution is available on all the supported architectures/systems. As long as it's not, the software version must be maintained anyway, and might suffer from bitrot if it's no longer exercised on the major architectures.


Build something like this https://stacker.news


This is a cool idea.


One thing to try is disagree, put down "I told you so" collateral, and commit. If the party you disagreed with doesn't produce the expected result then they should lose something. Cost is a forcing function; without it there are no consequences.


It might be a combination of efficiency and generating value. If you're efficient enough to generate $1000/hr but the org has no opportunity or product for you that can generate this, efficiency in performance metrics alone doesn't move the company's bottom line.


Exactly: the word "work" was used as a simplification of the overall output expected. You also need to account for the parts that are hard to quantify: human burnout, turnover rate, time to deliver a product to the market, amount of rework, tech debt added, etc. Someone who delivers a fancy project full of tech debt may require 5x more effort on rework than the original build took, and the team gets severely burned out after the project. Would it be worth it? In the short term, senior management will see it as a good idea. But most managers and senior managers are too naive to account for these hidden variables, or in most cases not interested in taking these metrics into account.


Good job on building something cool and putting it in front of people.

The biggest flaw I see is that it generates marketing-speak. It fails to answer well the most important question.

Here's what it gave me:

"Beddit is developing an, engaging online forum that will revolutionize digital communities via structured discussions, dynamic networking and unique engagement tools. Focused on aligning the often chaotic world of online networking, Beddit offers users a vibrant and organized platform for connecting, learning, and growing. Key functionalities include advanced topic filters, an interactive voting system for significant contributions, as well as real-time discussion monitoring for maintaining quality discussions. As we further enhance our product, Beddit aims to incorporate machine learning for personalized user experiences, AI moderation, and predictive analytics to anticipate trending topics, creating the future of online forums."

Here's something better I wrote:

"We'll make a better Reddit. Users visit a website to post links and comments and we save them in a database. A background process ranks news based on number of views and puts them on the front page."

GPT-4's response doesn't say how the site works. The description above does.

