Finding vulnerabilities everywhere doesn't require any skills anymore, nor Mythos.
See https://github.com/Swival/security-audits/ for examples: automated security audits produced with the swival.dev /audit command, including audits of large codebases such as the entire OpenBSD base system.
`tokio`, and Rust `futures` in general, are perfectly fine for typical applications.
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.
LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.
This hasn’t been true for some months. Claude has gotten better about using the latest versions of crates, and when it does encounter a breaking change from what it expects, it is usually quick to find the change in the docs or the crate’s source code.
What you are talking about used to be a pain point, but is now pretty much gone.
Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.
So Bun is going to become a fully vibe-coded codebase, with important details lost in translation.
I’ve been a huge supporter of Bun, but now I’d be extremely reluctant to deploy it in production.
It’s also a bit disappointing to see Jared change his mind so quickly. He’s an incredible developer with deep knowledge of how to write clean, maintainable, efficient code. But now it feels like his talent is being sidelined, and Claude has been given full control over the codebase.
Claude Code itself seems to be built that way: they keep piling on new features every day, but it has become this big, bloated Frankenstein slug.
Bun used to be a small, elegant, clean codebase. Now I’m worried it may turn into an unreliable mess.
Ironically, there are plenty of evals showing that it’s not actually that great. Even with Anthropic models, other harnesses are more efficient, both in terms of the number of problems solved and token usage.
Significant regressions also seem to be introduced from time to time after releases.
The UX is great, and if you need a kitchen sink packed with tons of features, even though you’ll probably only end up using a fraction of them, it’s fine.
But if you want something that performs well, you’re better off using something like Opencode or Swival.dev
I've used it to translate SQLite (with a few extensions) and, as far as I know, it's been used (with varying degrees of success) to translate the MARISA trie library (C++), libghostty (Zig), zlib, Perl, and QuickJS.
More on-topic, I use a mix of an unevaluated expression stack and a stack-to-locals approach to translate Wasm.
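As a rough illustration of the difference (this is not wasm2go's actual output; the function names and the instruction sequence are made up), here is how the same Wasm sequence `local.get 0; local.get 1; i32.add; local.get 2; i32.mul` could be rendered either by naively simulating the operand stack at runtime or by folding the stack into Go expressions and locals at translation time:

```go
// Hypothetical sketch, not wasm2go output.
package main

import "fmt"

// Naive approach: keep a runtime operand stack and push/pop per instruction.
func mulAddStack(l0, l1, l2 int32) int32 {
	stack := make([]int32, 0, 3)
	stack = append(stack, l0) // local.get 0
	stack = append(stack, l1) // local.get 1
	a, b := stack[len(stack)-2], stack[len(stack)-1]
	stack = append(stack[:len(stack)-2], a+b) // i32.add
	stack = append(stack, l2)                 // local.get 2
	c, d := stack[len(stack)-2], stack[len(stack)-1]
	return c * d // i32.mul
}

// Expression-stack / stack-to-locals approach: operands are tracked during
// translation, so the generated Go is ordinary straight-line code.
func mulAddLocals(l0, l1, l2 int32) int32 {
	s0 := l0 + l1 // i32.add folded into one expression
	s1 := s0 * l2 // i32.mul
	return s1
}

func main() {
	fmt.Println(mulAddStack(2, 3, 4), mulAddLocals(2, 3, 4)) // 20 20
}
```

The second form is what makes the output readable and lets the Go compiler optimize it roughly like hand-written code, instead of churning through slice pushes and pops for every instruction.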
Interesting. I started working on this same idea a couple of years ago as a way to bypass CGo. Eventually I moved on to something else. Glad someone else is working on this. How does the generated Go performance compare to the original WASM performance?
That's going to depend on what you mean by "original Wasm performance".
What were you using to run Wasm instead of this?
I can compare with wazero, which I was previously using, and say performance stayed mostly in the same ballpark. Things that crossed the Go-to-Wasm boundary very often became much faster; things that stayed mostly in Wasm became slightly slower, since the wazero compiler is pretty good.
wasm2go also does not support SIMD, so if your Wasm module uses/benefits from SIMD, you'll notice.
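To make the boundary-crossing point concrete, here is a hedged sketch: the add.wasm module, its exported "add" function, and addTranslated are all hypothetical placeholders. With wazero, every call into the module marshals arguments and results through []uint64; with translated code, it is just an ordinary Go function call.

```go
// Hedged sketch only: add.wasm, its "add" export, and addTranslated are
// hypothetical placeholders, not real wasm2go output.
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/tetratelabs/wazero"
)

// addTranslated stands in for a Wasm-to-Go translation of the same export:
// a plain Go function, callable with no marshaling at all.
func addTranslated(a, b int32) int32 { return a + b }

func main() {
	ctx := context.Background()

	// wazero path: every call crosses the Go-to-Wasm boundary and
	// marshals arguments and results through []uint64.
	wasmBytes, err := os.ReadFile("add.wasm")
	if err != nil {
		panic(err)
	}
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)
	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		panic(err)
	}
	results, err := mod.ExportedFunction("add").Call(ctx, 2, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(int32(results[0])) // 5

	// Translated path: an ordinary Go call the compiler can even inline,
	// which is why code that crosses this boundary often gets much faster.
	fmt.Println(addTranslated(2, 3)) // 5
}
```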
Go generates large Wasm modules, because it bundles its goroutine scheduler, garbage collector and standard library into the module.
Translating that back to Go will give you a pretty big Go file.
Go is "known" for being fast to compile, but that huge Go file will take (at least?) as long to compile as compiling the Go toolchain does.
wasm2go is best used on moderately sized modules (like SQLite). Last I heard, the person who tried to translate Perl got an 80 MB Go file that took them 20 minutes to compile.