Instead of "I understand the causal mechanism and can predict what happens if I change X," you get something more like "I have a sufficiently rich model that I can simulate what happens if I change X, with probabilistic confidence." The answers are distributions, not deterministic outputs. That's a different kind of knowing.
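To make "the answers are distributions" concrete, here's a minimal Python sketch. Everything in it is invented for illustration — the model, the coefficient, the noise — the point is just that "what happens if I change X?" comes back as a shift between two distributions, not a single deterministic number.

```python
import random
import statistics

# A toy stand-in for a "sufficiently rich model": the outcome depends on x
# plus noise whose causes we can't decompose. The function and all the
# numbers here are made up for illustration.
def simulate(x, rng):
    return 2.0 * x + rng.gauss(0, 1)

rng = random.Random(42)
baseline = [simulate(1.0, rng) for _ in range(10_000)]
intervened = [simulate(1.5, rng) for _ in range(10_000)]

# The "answer" to the intervention is a distribution shift: the mean moves,
# but every individual run still scatters around it.
print(statistics.mean(baseline), statistics.stdev(baseline))
print(statistics.mean(intervened), statistics.stdev(intervened))
```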
At the beginning this sounded like, "hard problems are complex, machine learning can help us manage complexity, therefore we will be able to solve hard problems with machine learning", which betrays a shallowness of understanding. I think what this essay argues here is a little deeper than that trite tech-bro hype meme.
But I disagree with this conclusion: I don't know that we can build these models in the first place, or that our new LLM/transformer-powered tools can help solve these problems. If simulation were the answer to everything, why would new ML tools make a significant difference in ways that existing simulation tools have not?
Stuff like AlphaFold is amazing—I'm not saying that better medical results won't come from ML—but I feel like there's some substance missing, and even the level of excitement the author expresses here needs more and better backing.
Grad student here. The paper-reading experience on an iPad is vastly superior to a laptop, and I've got an aging iPad Gen 8 that doesn't have enough storage to upgrade. I run the Zotero iOS app and it's absolutely perfect for annotating papers and keeping my bibliography organized.
In undergrad my iPad was far and away my favorite note-taking device. Digital pen-and-"paper" beats laptop for 99% of note taking.
Ghostty is extremely performant. I had a bug in some concurrent software; when I added logging, the bug would disappear, because the threads serializing on the STDERR lock changed the timing enough to make it go away. Ghostty consumed the output fast enough that I was still able to reproduce the bug.
Maybe I should have been writing everything to a file. ¯\_(ツ)_/¯ Anyway, I didn’t think of it at the time and Ghostty saved me.
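For anyone who hasn't hit this class of heisenbug: here's a minimal Python sketch of the mechanism (nothing here is the actual software from the story). When every thread grabs a shared lock to emit a log line right where it touches shared state, the lock serializes the racy section as a side effect, and the bug you were hunting vanishes.

```python
import threading

counter = 0
log_lock = threading.Lock()

def worker(iterations, log):
    global counter
    for _ in range(iterations):
        if log:
            with log_lock:
                # A real program would write to stderr here; holding the
                # lock incidentally protects the increment, so the race
                # disappears the moment you add logging.
                counter += 1
        else:
            counter += 1  # unsynchronized: increments can be lost

def run(log):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000, log))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the "logging" lock held around the update, all 4 x 100,000
# increments survive; without it, the final count is anyone's guess.
print(run(log=True))
```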
It was running BC. I had high hopes for switching to CS because I'd heard the same thing you had, but when I tried it, HN slowed to a crawl. This stuff is so unpredictable.
Arc uses mutable cons, but Racket has immutable cons. So it's a problem.
In Racket BC mutable and immutable cons use the same struct at the C level, so both are quite fast and almost interchangeable, if you cross your fingers that the optimization passes don't notice the mess and get annoyed (somewhat like UB in C).
In Racket CS immutable cons are implemented as cons in Chez Scheme, but mutable cons are implemented as records in Chez Scheme, so they are not interchangeable at all.
Arc used a magic unsafe trick to mutate immutable (at the Racket level) cons that are actually mutable (at the Chez Scheme level) cons. The trick is slow because the Racket-to-Chez-Scheme "transpiler" doesn't understand it and does not generate nice fast code.
One solution is to rewrite Arc to use mutable cons in Racket, but those are slow too, because they are records in Chez Scheme with less magic than Chez Scheme's cons. So my guess is that it would be a lot of work for little speed gain.
[Also, ¿kogir? asked a long time ago on the mailing list about how to use more memory in Racket BC, or how to use it better, or something like that. I think he made a small patch for HN because it has some unusual requirements. Anyway, I'm not sure whether it's still in use.]
---
The takeaway is that mutable cons are slow in Racket.
I mean… GitHub's uptime story has been getting worse…
I hear you and you're right that Codeberg has some struggles. If anyone needs to host critical infra, you're better off self-hosting a Forgejo instance. For personal stuff? Codeberg is more than good enough.
For the doubters replying here, Codeberg really is on average faster than GitHub. It's great. Objective measurements here: https://forgeperf.org/
Codeberg does suffer from the occasional DDoS attack—it doesn't have the resources that GH has to mitigate these sorts of things. Also, if you're across the pond, latency can be a bit of an issue. That said, the pages are lighter weight, and on stable but low-bandwidth connections Codeberg loads really quickly and all the regular operations are super zippy.
What I think after looking at these numbers is that we need to take the nuclear option: a native code review client, with no web stack at all. Seconds (times 100 or so for one larger review) are not in any way an acceptable order of magnitude when discussing the performance of a front-end for editing tens of kilobytes of text. And then there's the slow, annoying click orgy to fold out more surrounding code, a misfeature needed just to work around how insanely slow loading syntax-highlighted text is. Git is very fast, text editing is very fast, bullshit frameworks are slow.
I don't think it would take great contortions to implement an HTML + JS frontend that's an order of magnitude faster than the current crapola, but in practice it... just doesn't seem to happen.