Hacker News | eru's comments

Author should switch the keyboard to Dvorak. Gives more interesting gibberish when mashed.

Why even use pools? Ask your actuary about risks.

> This comes up now as “is vibecoding sane if LLMs are nondeterministic?” Again: do you want the CS answer, or the engineering answer?

Determinism would help you. With a bit of engineering, you could make LLMs deterministic: basically, fix the random seed for the PRNG and make sure none of the other sources of entropy mentioned earlier in the article contribute.
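A toy sketch of the idea in plain Python (a softmax sampler over fixed logits, not a real LLM): once the PRNG seed is pinned and no other entropy leaks in, the sampled sequence is reproducible.

```python
import math
import random

def sample_token(logits, rng):
    """Sample an index from softmax(logits) using the given RNG."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1

def generate(seed, steps=10):
    # Fixing the seed makes the whole sampled sequence reproducible,
    # provided every other source of entropy is also pinned down.
    rng = random.Random(seed)
    logits = [0.1, 1.5, 0.3, 2.0]  # placeholder "model output"
    return [sample_token(logits, rng) for _ in range(steps)]

print(generate(42) == generate(42))  # same seed, same output: True
```

In a real system the hard part is the "other sources of entropy": batching, kernel scheduling, and floating-point reduction order, not the sampler itself.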

But that would barely impact any of the issues people bring up with LLMs.


If you've got a fixed GPU that doesn't degrade at all during the process, I think? If you switch GPUs (even another one of the same model) or run it long enough, the rounding differences feeding forward through the network will produce different results, right?

Why would rounding not be deterministic?

The rounding itself is, but the operations leading to what gets rounded are not associative and the scheduling of the warps/wavefronts isn't guaranteed.
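A tiny illustration of that non-associativity, in plain Python rather than on a GPU: regrouping the same operands changes the rounded result, so if the warp schedule changes the reduction order, the sum can change too.

```python
# IEEE-754 addition is not associative: each individual addition
# rounds correctly, but regrouping changes what gets rounded.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False

# More dramatically, with mixed magnitudes the order decides
# whether a small term survives or is absorbed:
vals = [1e16, 1.0, -1e16]
s1 = (vals[0] + vals[1]) + vals[2]  # 1.0 absorbed into 1e16 first -> 0.0
s2 = (vals[0] + vals[2]) + vals[1]  # big terms cancel first -> 1.0
print(s1, s2)
```

A GPU reduction over thousands of partial sums is exactly this effect at scale, with the grouping chosen by the scheduler rather than by parentheses.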

Thanks!

And determinism isn’t particularly helpful with compilers. We expect adherence to some sort of spec. A compiler that emits radically different code depending on how much whitespace you put between tokens could still be completely deterministic, but it’s not the kind of tool we want to be using.

Determinism is a red herring. What matters is how rigorous the relationship is between the input and the output. Compilers can be used in automated pipelines because that relationship is rigorous.


The problem you pointed out is real, but determinism in compilers is still useful!

Suppose you had one of those wildly unstable compilers: concretely, if you change the formatting slightly, you get a totally different binary. It still does the same thing as per the language spec, but it goes about it in a completely different way.

This weak determinism is still useful, because you can still get reproducible builds. E.g. volunteers can audit, say, Debian binary packages by re-running the compiler with the exact same input and checking that the output matches. So they can verify that no supply-chain attack has fiddled with the binaries: at least the binaries correspond to the sources they are claimed to come from.
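The audit step amounts to comparing digests of two independently produced artifacts. A minimal sketch, with placeholder files standing in for "the binary I rebuilt from source" and "the binary that was shipped":

```python
import hashlib

def digest(path):
    """SHA-256 of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder artifacts: with a deterministic compiler, rebuilding
# from the same sources yields byte-identical output.
with open("rebuilt.bin", "wb") as f:
    f.write(b"same compiler, same input, same output")
with open("shipped.bin", "wb") as f:
    f.write(b"same compiler, same input, same output")

# Matching digests mean the shipped binary really came from the
# claimed sources; a mismatch means something interfered.
print(digest("rebuilt.bin") == digest("shipped.bin"))  # True
```

The point is that no trusted third party is needed: anyone with the sources and the toolchain can re-derive the digest independently.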


It’s useful but far less important. This is easily seen by the fact that we went decades building lots of software without reproducible builds.

If you go further back in history, we went even longer without building any software!

No clue whether that's a good argument.


What.

The argument is that determinism in compilers isn't particularly important for building software because we did without it for a long time.

Your argument would be... that building software isn't particularly important for building software...?

The actual argument you'd be making would be something like, building software isn't particularly important for survival. Which is pretty obviously true, for the reason you state.


And the reason that relationship can be rigorous is that compilers by definition translate one formal language into another. You can't have a compiler that translates English to machine code in a rigorous, repeatable manner, because English is ambiguous.

It's a bureaucracy like any other.

If bad actors learn about the fail-closed behaviour, they can conceivably cause you more harm.

This is a losing money vs. losing freedom situation.

Maybe. But for a company everything is fungible.

Also, in e.g. C code, many exploits start out as what would only be a DoS, but can later be turned into a more dangerous attack.

If you're submitting a CVE for a primitive that seems likely to be useful for further exploitation, mark it as such. That's not the case for ReDoS or the vast majority of DoS. It's already largely the case that you'd mark something as "privesc" or "rce" if you believe it provides that capability, without necessarily having a full, reliable exploit.

CVEs are at the discretion of the reporter.


Maybe. But at least everyone being on the same (new) version makes things simpler, compared to everyone being on different random versions of whatever used to be current when they were written.

> The kicker is that updating the dependencies probably just introduces new CVEs to be discovered later down the line because most software does not backport fixes.

I don't understand how the second part of that sentence is connected to the first.


I could have written it more clearly. If you’re forced to upgrade dependencies to the latest version to get a patch, the upgrade likely contains new unrelated code that adds more CVEs. When fixes are backported you can get the patch knowing you aren’t introducing any new CVEs.

Dvorak works really well for me. (Though you might want to pick Colemak or Neo2 these days.) I use Dvorak on both my Kinesis Advantage and on 'normal' keyboards like on a laptop.

It's not so much about speed, as about comfort.


My personal experience after switching to Colemak is mostly neutral. Speed is about the same after some training, around 70 WPM. Comfort maybe improved a bit, but nothing life-changing.

Some people claim that they went from 60 WPM on Qwerty to over 100 WPM on some newly designed layout, but my experience is clear: if you do it for the speed, you will be disappointed.


I'd guess the speed improvement in those cases likely came from learning a better technique, like touch typing and using more of your fingers. Afaik a lot, if not most, of the fastest typists are still on qwerty.

I definitely learned to touch type with Dvorak, because I couldn't look at the keyboard anymore to get help.

On my Kinesis Advantage it's a lot more than four keys. And they definitely help.

The 12 thumb keys on the Kinesis are quite a luxury. I have:

Left hand: control, meta, command, hyper, super, backspace

Right hand: space, enter, command, hyper, super, del

