thelittlenag's comments | Hacker News

And if you are that kind of person, why would you want to be an FDE? (Not rhetorical, btw, I'm literally interviewing for an FDE role tomorrow.)

Same. I worked on an in-house product many years ago now where lineage and provenance were the entire point. Really cool to see this!

Thank you!

I used to work for the company that owned and operated GovDeals. That was a really interesting experience. I remember one of their many companies hosted the listing for, and processed the sale of, an oil refinery (or something similar), just like some random item on eBay, except the sale ran to many tens of millions of dollars.


I don't really like this article. There isn't anything particularly noteworthy about noticing that some computations have outcomes that allow some form of recovery while others do not.

But there are some obvious follow up questions that I do think need better answers:

Why is recovery made so hard in so many languages?

Error recovery really feels like an afterthought. Sometimes that's acceptable, what with "scripting" languages, but the poor ergonomics and design of recovery systems in most languages are just a baffling omission. We deserve better options for this type of control flow.

Also, why do so many languages make it so hard to enumerate the possible outcomes of a computation?

Java tried, with checked exceptions, to ensure every method declared in its signature how it could succeed or fail. That went so poorly that we simply put everything under RuntimeException and gave up. Yet resilient, production-grade software still needs to know how things can fail, and which failures indicate a recoverable situation versus a process crash-and-restart.

Languages seem to want to treat all failures as categorically similar, yet they clearly are not. Recovery/retry, logging, and accumulation all appear in the code paths production code needs to express when errors occur.

Following programming language development, the only major advancement I've noticed myself has been the push to put more of the outcomes into the values of a computation and then use a type system to constrain those values. That has helped with the enumeration aspect, leaving exceptions to mainly just crash a system.
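To make that concrete, here's a minimal sketch of that style using C++23's std::expected (parse_int and the error enum are my own illustration, not from the article):

    #include <charconv>
    #include <expected>
    #include <string_view>
    #include <system_error>

    // The signature enumerates every outcome: an int, or exactly one of
    // these two errors. Nothing else can come out of the computation.
    enum class ParseError { Empty, NotANumber };

    std::expected<int, ParseError> parse_int(std::string_view s) {
        if (s.empty()) return std::unexpected(ParseError::Empty);
        int value = 0;
        auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
        if (ec != std::errc{} || ptr != s.data() + s.size())
            return std::unexpected(ParseError::NotANumber);
        return value;
    }

The caller can see every failure case in the type and decide which ones are recoverable, leaving exceptions for the genuine crash paths.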

The other advancement has been algebraic effects, which feel like the first real step forward in the recovery side itself that I've observed. Yet this feature is decried as too academic and/or complex. Yes, error handling is complex, and writing crappy software is easy.

Maybe AI will help us climb out of the crab bucket that is error handling.


My father is a data point in this. He was a farmer all his life and ultimately it was Parkinson's that did him in. While we took some precautions, I have no doubt that the herbicides we used should have been handled more carefully.


Sorry for your loss - it's a terrible disease.

My mother is also a data point - she grew up on a farm where her father used it. She was diagnosed with Parkinson's in 2018.


I've really liked interviews where I either present a personal project I've worked on, or get to interview someone about their own personal projects. It's just more fun.

Their major complaint about the project approach is not getting signal on adaptability to new codebases. That has never been a primary concern at any company I've worked at, and frankly, if engineers are touching a new codebase every month then I'm getting a bit worried.


I've now done probably close to 100 system design interviews. One of the main things I've looked for in candidates is their ability to identify, communicate, and discuss trade-offs. The next thing on my checklist is their ability to move forward, pick an option, and defend that option. Really nimble candidates will pivot, recognizing when to change approaches because requirements have changed.

The goal here is to see if the candidate understands the domain (generic distributed systems) well enough on their own. For more senior roles I look to make sure they can then communicate that understanding to a team, and then drive consensus around some approach.


> For more senior roles I look to make sure they can then communicate that understanding to a team, and then drive consensus around some approach.

This is why I’m stuck at Senior lol. I can craft incredibly deep technical documents on why X is the preferred path, but when inevitably someone else counters with soft points like DX, I fall down. No, I don’t care that the optimal solution requires you to read documentation and understand it, instead of using whatever you’re used to. If you wanted to use that, why did you ask me to do a deep-dive into the problem?


I'm old enough to recall when boost first came out, and when it matured into a very nice library. What's happened in the last 15 years that boost is no longer something I would want to reach for?


C++11 through C++17 negated a lot of its usefulness - the standard library absorbed much of what Boost originally offered (shared_ptr, threads, regex, filesystem, optional, and so on).

Alternative libraries like Qt are more coherent and better thought out.


Qt is... fine... as long as you're willing to commit and use only Qt instead of the standard library. It predates the STL, so the two don't really mesh at all.


In my experience I've had no issues. I occasionally have to use things like toStdString(), but otherwise I use a mix of std and Qt and haven't had any problems.


That's basically what I mean. You have to call conversion functions when your interface doesn't match, and your ability to use static polymorphism goes down. If the places where the two interact are few it works fine, but otherwise it's a headache.
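For anyone who hasn't hit it, the friction looks roughly like this (a trivial sketch; the helper names are made up):

    #include <QString>
    #include <string>

    // Every boundary crossing between Qt and std is an explicit conversion.
    std::string to_std(const QString& s) {
        return s.toStdString();           // QString -> std::string (UTF-8)
    }

    QString from_std(const std::string& s) {
        return QString::fromStdString(s); // std::string (UTF-8) -> QString
    }

One conversion at a boundary is fine; scatter them across every interface and it becomes the headache I mean.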


I use Boost and Qt but completely disagree. Every new version of Boost brings extremely useful libraries that will never be in std: Boost.PFR was a complete game changer, Boost.Mp11 ended the metaprogramming framework wars, and there's also the recently added support for MQTT, SQL, etc. Boost.Beast is now the standard HTTP and WebSocket client/server in C++. Boost.JSON has a simple API and is much more performant than nlohmann::json. Etc., etc.
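To give a taste of why Boost.PFR felt like a game changer: it reflects over the fields of a plain aggregate with no macros or registration (a minimal sketch, not production code):

    #include <boost/pfr.hpp>
    #include <iostream>

    struct Point { int x; int y; };  // plain aggregate, nothing registered

    int main() {
        Point p{1, 2};
        // Iterate over every field of the aggregate, no boilerplate.
        boost::pfr::for_each_field(p, [](const auto& field) {
            std::cout << field << '\n';
        });
    }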


Jon has addressed this elsewhere, but the gist of the argument, as I understand it, is that he hasn't worked professionally in any other ecosystem or language. So leaving Scala is tantamount to abandoning his entire professional experience (20+ years!), skill set, and all his open source contributions, and then restarting from scratch in a new ecosystem. All without any guarantee that the allegations around him won't just follow him. It's a really tough position to be in.


I believe there is at least one other thing I got from the post: that he feels he shouldn't have to abandon Scala, perhaps because doing so would be to give in to a sort of injustice (in his mind)?


My comment here is a very narrow one. In general I agree with your sentiment and thoughts, so please don't misread me. There is one nit I need to pick, however.

There is a subtle, but worthwhile, difference between "plausible" and "credible". Lots of stories are plausible. Few are credible.

In emotion-laden cases like this we tend to want to believe stories we already agree with, or have some investment in. I'm no exception to that.

We need to not be misled by what is plausible, or confuse that with what is credible.


Very interesting point. You're right, there is a difference, and the difference is subtle. I agree with what you're implying: the accusers' stories are plausible. Credibility requires more information.

