Hacker News | deathanatos's comments

> I guess there's also a difference between "can use checks" vs "have to use checks" because, aside from rent, I can't recall having to write checks.

Landlords, IME, insist on a physical check for the first payment. I think they're performing some sort of blood ritual with it in the back of the office. After the sacrifice is complete, though, they'll switch to ACH.

The only other place I've ever had to use checks is for large purchases, where the amount exceeds what cards can handle. Even these would be pretty rare for most people, since you'd likely finance a large purchase with a loan instead.


For my last car purchase, I think I paid the deposit by card but paid the balance by personal check. In years past that balance would have been a cashier's check, but I guess systems these days can confirm there's money in the account.

> One dreaded and very common situation is when a failing CI run can be made to pass by simply re-running it. We call this flaky CI.

> Flaky CI is nasty because it means that a CI failure no longer reliably indicates that a mistake was caught. And it is doubly nasty because it is unfixable (in theory); sometimes machines just explode.

> Luckily flakiness can be detected: Whenever a CI run fails, we can re-run it. If it passes the second time, we are sure it was flaky.

One of the specialties that I have (unwillingly!) specialized in at my current company is CI flakes. Nearly all flakes, well over 90% of them, are not "unfixable", nor are they even really some bogeyman unreliable thing that can't be understood.

The single biggest change I think we made that helped was having our CI system record the order¹ in which tests are run. Rerunning the tests, in the same order, makes most flakes instantly reproduce locally. Probably the next biggest reproducer is "what was the time the test ran?" and/or running it in UTC.
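A minimal sketch of the replay idea, in TypeScript (all names hypothetical; the real CI system presumably records this differently):

```typescript
// CI records the exact order tests ran in; locally we re-sort the discovered
// tests into that recorded order to reproduce an order-dependent flake.
function replayOrder(localTests: string[], recordedOrder: string[]): string[] {
  const rank = new Map(recordedOrder.map((name, i) => [name, i]));
  // Tests CI never saw (e.g., newly added locally) sort last; since
  // Array.prototype.sort is stable, they keep their local relative order.
  return [...localTests].sort(
    (a, b) =>
      (rank.get(a) ?? recordedOrder.length) -
      (rank.get(b) ?? recordedOrder.length)
  );
}
```

The same trick extends to the other reproducers mentioned below: record the wall-clock time and timezone of the failing run, and replay with those pinned too.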

But once you get from "it's flaky" (and fails seemingly at random) to "it fails 100% of the time on my laptop when run this way", it becomes easier to debug, b/c you can re-run it, attach a debugger, etc. Database sort issues (SQL result order is not deterministic unless you ORDER BY), issues with database IDs (e.g., a test expects row ID 3, usually gets row ID 3, but some other test has bumped us to row ID 4²), timezones; those are probably the biggest categories of "flakes".

While I know what people mean by "flake", in practice "flake" as a word usually means "failure mode I don't understand".

(Excluding truly transitory issues like a network failure interfering with a docker image pull, or something.)

(¹there are a lot of reasons people don't have deterministically ordered CI runs. Parallelism, for example. Our order is deterministic, b/c we made a value judgement that random orderings introduce too much chaos. But we still shard our tests across multiple VMs, and that sharding introduces its own changes to the order, as sometimes we rebalance one test to a different shard as devs add or remove tests.)

²this isn't usually because the ID is hardcoded, it is usually b/c, in the test, someone is doing `assert Foo.id == Bar.id`, unknowingly. (The code is usually not straightforward about what the ID is an ID to.) I call this ID type confusion, and it's basically weakly-typed IDs in langs where all IDs are just some i32 type. FooId and BarId types would be better, and if I had a real type system in my work's lang of choice…
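In a language that does have the type system for it, the "FooId vs. BarId" fix can be sketched with branded types. This is a hypothetical TypeScript illustration, not the commenter's actual codebase:

```typescript
// Both IDs are plain numbers at runtime, but the brands make them distinct
// types, so accidentally comparing a FooId against a BarId fails to compile.
type FooId = number & { readonly __brand: "FooId" };
type BarId = number & { readonly __brand: "BarId" };

const fooId = (n: number): FooId => n as FooId;
const barId = (n: number): BarId => n as BarId;

function sameFoo(a: FooId, b: FooId): boolean {
  return a === b;
}

// sameFoo(fooId(3), barId(3)); // compile error: BarId is not assignable to FooId
```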


A fairly large category of the flaky CI jobs I see is "dodgy infrastructure". For instance, one recurring type for our project is one I just saw fail this afternoon, where a GitLab CI runner tries to clone the git repo from GitLab itself and gets an HTTP 502 error. We've also had issues with "the s390 VM that runs CI jobs is on an overloaded host, so mostly it's fine, but occasionally the VM gets starved of CPU and some of the tests time out".

We do also have some genuinely flaky tests, but it's pretty tempting to hit the big "just retry" button when there's all this flakiness we can't control mixed in there.


> those are probably the biggest categories of "flakes".

Interesting. In my experience, it is always either a concurrency issue in the program under test or PBTs finding some extreme edge case that was never visited before.


We are using very different versions of JIRA.

The cloud version I used was slow and riddled with bugs. Entire views sometimes just refused to load or render, or something.

Did it "get the job done?" Yes, in a literal reading of those words, I suppose it did, but anyone who understands the amount of work that a modern 2.4 GHz CPU should be able to do per unit time would not think highly of it.

Nowadays … my company uses Linear … which, while it does have a sleeker, more modern looking UI … is nobody able to make a good bug tracker?


> Unfortunately, it is often used as a killer argument to chop up a software system beyond all recognition.

In all my years, across all the companies and codebases I have ever seen, I have literally never seen a system chopped up this way by the SRP. The number of monoliths, though, is uncountable. The number of modules where some function is trying to juggle 8 different tasks, all conflicting … this is the constant state of affairs of actual code in the industry.

The SRP is not about chopping code up into (literal!) single lines; as I (and many others; this isn't some unique thought of mine) read it, the SRP is about semantic, not syntactic, responsibility. It is fine, within the SRP, to have syntactically identical functions, if they serve semantically different purposes. "Are these two functions/sections of code bound to be the same by the laws of physics?" If "no" … it's fine if there's a little copy-paste here and there. Copy B's requirements might change down the line, and coalescing the copies into one would cause pain later.

Drink with moderation.

> The big advantage of a group 0 component is that you can consume it within components from any other group (like blood type 0 (sic) can be received by any other type).

Sigh. The blood type is "type O". Though I do like the A/T separation, and yeah, generic-ish things becoming aware of specific logic is usually a smell. (Though I'd love, like, some thoughtful reasoning. It resonates with me … but maybe a "why?".)


Even just within the Thinkpad lineup, their website is a mess. Let's even restrict ourselves to just T series Thinkpads.

First, the page looks like it misrenders with garish, inverse-color boxes breaking the apparent margin of the page. Then we get to the models:

  * ThinkPad T14s Gen 6 (14" Snapdragon) Laptop
  * ThinkPad T16g Gen 3 (16" Intel) Laptop
  * ThinkPad T1g Gen 8 (16" Intel) Laptop
  * ThinkPad T14 Gen 6 (14" AMD) Laptop
  * ThinkPad T14s 2-in-1 (14" Intel) Laptop
… that's just the first row. There are 17 items shown. Mostly it's just a poor presentation: there are ~3-4 actual lines, and the rest of what's shown is the combinatorial complexity of the various ways you can customize them. It's a crapshoot of a presentation.

The builds themselves seem worse now than before: they're overall more expensive for what you're getting vs. a few years ago. E.g., the GPU is … gone? They're all iGPUs now. They include a "45% NTSC" screen by default, which is something I've never heard of; I thought sRGB was the literal bottom of the barrel, but I guess we can go deeper. The warranty is pathetic, but so too is Apple's.

You are right, you can get them without Windows now.


… needing to pay for postage hardly stops the spam I receive in my own mail. Even the most trivially absurd stuff, like "install rooftop solar" — I don't own a roof.

Mass mailing usually gets a hefty bulk discount.

Why wouldn't you spread out, though, instead of working in basically a line? (At least, as much as topography reasonably allows.) That way, your travel distance to any particular item increases at like sqrt(stuff), instead of just linearly.

yeah, I've been thinking about that since I read the article!

I'm wondering if the line goes along the crest of the hill, so it's basically as wide as the crest is. But still: why 7-8 holes wide, and why are there some groups... lots of questions to think about!


I think even having references that aren't necessarily null is only part of it. Imagine that your language supports two forms of references, one nullable, one not. Let's just borrow C++ here:

  &ref_that_cannot_be_null
  *ref_that_can_be_null
The latter is still a bad idea, even if it isn't the only reference form, and even if it isn't the default, if it lets you do this:

  ref_that_can_be_null->thing()
Where only things that are, e.g., type T have a `thing` attribute. Nulls are "obviously" not T, but a good number of languages' type systems that permit nullable references, or some form of them, permit treating what is in actuality T|null as if it were just T, usually leading to some form of runtime failure if null is actually used, ranging from UB (in C, C++) to panics/exceptions (Go, Java, C#, TS).

It's an error that can be caught by the type system (any number of other languages demonstrate that), and null pointer derefs are one of those bugs that just plague the languages that have it.


TypeScript actually supports nulls through type unions, exactly as Hoare suggests. It will not let you dereference a possibly-null value without a check.

C# also supports null-safety, although less elegantly and as opt-in. If enabled, it won’t let you dereference a possibly-null reference.
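A minimal TypeScript illustration of the union behavior described above (`User`/`findUser`/`greet` are hypothetical names; assumes `strictNullChecks` is on):

```typescript
interface User {
  name: string;
}

// The return type is an explicit union: the caller must handle null.
function findUser(id: number): User | null {
  return id === 1 ? { name: "alice" } : null;
}

function greet(id: number): string {
  const u = findUser(id);
  // `u.name` here would be a compile error: "Object is possibly 'null'".
  if (u === null) return "who?";
  return `hi ${u.name}`; // narrowed to User, so this type-checks
}
```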


> If enabled, it won’t let you dereference a possibly-null reference.

It will moan, it doesn't stop you from doing it, our C# software is littered with intentional "Eh, take my word for it this isn't null" and even more annoying, "Eh, it's null but I swear it doesn't matter" code that the compiler moans about but will compile.

The C# ecosystem pre-dates nullable reference types and so does much of our codebase, and the result is that you can't reap all the benefits without disproportionate effort. Entity Framework, the .NET ORM is an example.


You can certainly make unsafe null dereferences a compiler error in C#. It is just not the default, for reasons of backwards compatibility.

Add WarningsAsErrors for the project and you are done.
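Concretely, for an SDK-style .csproj that would look something like this (a sketch; `nullable` here promotes only the nullable-reference warnings to errors):

```xml
<PropertyGroup>
  <!-- Turn on nullable reference type analysis -->
  <Nullable>enable</Nullable>
  <!-- Make the nullable warnings fail the build instead of just moaning -->
  <WarningsAsErrors>nullable</WarningsAsErrors>
</PropertyGroup>
```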

> what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method?

And for that matter, what ensures it is even correct the first time it is written?

(I think this is probably the far more common problem when I'm looking at a bug, newly discovered: the logic was broken on day 1, hasn't changed since; the comment, when there is one, is as wrong as the day it was written.)


But, you've still got an idea of why things were done the way they were - radio silence is....

Go ask Steve, he wrote it, oh, he left about 3 years ago... does anyone know what he was thinking?


Python async tasks can be cancelled. But I don't think you can attach much context to the cancellation (I think you can pass a text message), so it would seem the same argument about what Go suffered from would apply.

(I also think there's some wonkiness with and barriers to understanding Python's implementation that I don't think plagues Go to quite the same extent.)


The current meta is to use task groups and bubble up the exception that cancelled the coroutine/task.
