> The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.
Well, there it is, the dumbest thing I'll read on the internet all week.
Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, and you want to use those to simulate an x86 and its attached devices so the whole process can start again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.
Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.
> while taking the joyful bits of software development away from you
Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?
Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.
I guess I enjoy solving problems, and recognize that the devil is always in the details, so I don't get much satisfaction until I see the whole stack working in concert. I never had much esteem for "architects" who sketch some blobs on the whiteboard and then disappear. I certainly wouldn't want to be "that guy" for anyone else, and I'm not even sure I could do it to an LLM.
> Well, there it is, the dumbest thing I'll read on the internet all week.
Rude.
In case you're open to learning, here's why I think this is useful.
The big lesson we've learned from Claude Code, Codex CLI et al over the past twelve months is that the most useful tool you can provide to an LLM is Bash.
Last year there was enormous buzz around MCP - Model Context Protocol. The idea was to provide a standard for wiring tools into LLMs, then thousands of such tools could bloom.
Claude Code demonstrated that a single tool - Bash - is actually much more interesting than dozens of specialized tools.
Want to edit files without rewriting the whole thing every time? Tell the agent to use sed or perl -e or python -c.
Look at the whole Skills idea. The way Skills work is you tell the LLM "if you need to create an Excel spreadsheet, go read this markdown file first and it will tell you how to run some extra scripts for Excel generation in the same folder". Example here: https://github.com/anthropics/skills/tree/main/skills/xlsx
That only works if you have a filesystem and Bash style tools for navigating it and reading and executing the files.
This is why I want Linux in WebAssembly. I'd like to be able to build LLM systems that can edit files, execute skills and generally do useful things without needing an entire locked down VM in cloud hosting somewhere just to run that application.
Here's an alternative swipe at this problem: Vercel have been reimplementing Bash and dozens of other common Unix tools in TypeScript purely to have an environment agents know how to use: https://github.com/vercel-labs/just-bash
I'd rather run a 10MB WASM bundle with a full existing Linux build in it than reimplement it all in TypeScript, personally.
I agree, bash, sed, etc. are great, but a VM running inside a browser seems like the least efficient way to access them. Even if you're stuck on Windows, Cygwin has been a thing for 30 years now, and WSL for ten or so? There should be plenty of ways to set up a sandbox without having to simulate an entire machine.
It sounds like what you're really trying to recreate is the Software Tools movement from 50 years ago, where there was a push to port the UNIX/BTL utilities to the widest possible variety of systems to establish a common programming and data manipulation environment. It was arguably successful in getting good ports available just about anywhere, evolving into GNU, etc., but it never really reached its apotheosis. That style of clear, easy-to-read-and-write software was still largely killed off by a few big industry players pushing a narrative that "enterprise" has to mean relational databases and distributed objects. It would be FASCINATING if AI coding agents are the force that brings it back.
This isn't meant to be a daily driver. I'd like the option to build systems that occasionally run filesystem agent loops on an ad-hoc basis, for any user. A browser is a really good platform for that.
So are Cygwin and WSL, though, for those who don't already have the luxury of being on Linux or UNIX (incl. macOS). I'm sure there are uses for running full-system emulators inside a browser, but access to bash and sed and gawk doesn't seem like one of them. Seriously, if that's the best way to get access to good text manipulation tools, why aren't you ditching your entire OS?
Because bash and sed and suchlike turn out to be the most useful tools for unlocking the abilities of AI agents to do interesting things - more so than previous attempts like MCP.
> Linux RISC-V virtual machine, powered by the Cartesi Machine emulator, running in the browser via WebAssembly
> a single 32MiB WebAssembly file containing the emulator, the kernel and Alpine Linux operating system. Networking supports HTTP/HTTPS requests, but is subject to CORS restrictions
My demo here loads 12.7MB (if you watch the browser network panel) to get to a usable Linux machine, it even has Lua! https://tools.simonwillison.net/v86
But Docker is free (unless you're a fairly large business, in which case containerd is still free, and you can either pay for the front-end license or figure out how to set up one of the free alternatives), and from what perspective are the isolations available for the containerd process inferior to those available for your browser process? The former was at least designed from the ground up with security, auditing, quotas etc. in mind, and offers better per-container granular control than your browser offers per-tab.
I would argue the exact opposite: Linux is great, but it wasn't really designed with a focus on containing hostile software, and while containers have come to be a decent security barrier, they're still one kernel bug away from compromise. On the other hand, the browser is very accustomed to being the most exposed security-sensitive software on a machine, and modern browsers and wasm in particular are designed against that threat. Heck, wasm is so good for security that Mozilla started compiling components to wasm and then back into native code to get memory safety ( https://hacks.mozilla.org/2020/02/securing-firefox-with-weba... ).
The problem is that C++ stores the vtable pointer inside the object, and the objects over which you're iterating often weren't allocated contiguously. Even when they are, if each object contains lots of other data, the vtable pointers won't necessarily be close to each other. That means invoking virtual functions inside a loop causes a lot of cache misses, and since the data you're fetching is a branch target, it's often hard to find other useful work to accomplish during the memory delay cycles. However, in a language where you can store a relatively tight array of object IDs (or even use tag bits in the this pointer), now you have a much higher cache hit rate on the indexes into your equally tight dispatch table, which will also have a high hit rate.
It's a fair amount of extra work, but in a hot loop it's sometimes worth it. "You can often solve correctness problems (tricky corner cases) by adding an extra layer of indirection. You can solve any performance problem by removing a layer of indirection."
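A minimal C sketch of the layout described above (the shape types and names here are mine, purely illustrative): a tight array of one-byte type tags plus a small shared dispatch table, so the hot loop walks contiguous memory instead of chasing a vtable pointer buried in each object.

```c
/* Hypothetical sketch: dispatch through a tight tag array and a
 * compact shared function table, instead of per-object vtable
 * pointers scattered across the heap. */

enum { KIND_CIRCLE, KIND_SQUARE, KIND_COUNT };

typedef double (*area_fn)(double);

static double circle_area(double r) { return 3.141592653589793 * r * r; }
static double square_area(double s) { return s * s; }

/* One small function table shared by all objects; with only a few
 * kinds it stays hot in cache for the entire loop. */
static const area_fn area_table[KIND_COUNT] = { circle_area, square_area };

/* The hot loop touches only two contiguous arrays plus the tiny
 * table above, rather than dereferencing a vtable pointer inside
 * each (possibly far-apart) object. */
double total_area(const unsigned char *kinds, const double *params, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += area_table[kinds[i]](params[i]);
    return total;
}
```

The `unsigned char` tags are the "tight array of object IDs" from the comment above: 64 of them fit in a single cache line, versus eight 8-byte vtable pointers.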
And how does one become a maintainer, if there's no way to contribute from outside? Even if there's some extensive "application process", what is the motivation for a relatively new user to go through that, and how do they prove themselves worthy without something very much like a PR process? Are we going to just replace PRs with a maze of countless project forks, and you think that will somehow be better, for either users or developers?
If I wanted to put up with software where every time I encounter a bug, I either have no way at all to report it, or perhaps a "reporting" channel but little likelihood of convincing the developers that this thing that matters to me is worthy of attention among all of their competing priorities, then I might as well just use Microsoft products. And frankly, I'd rather run my genitals through an electric cheese grater.
You get in contact with the current maintainers and talk to them. Real human communication is the only shibboleth that will survive the AI winter. Those soft skills muscles are about to get a workout. Tell them about what you use the software for and what kinds of improvements you want to make and how involved you'd like your role to be. Then you'll either be invited to open PRs as a well-known contributor or become a candidate for maintainership.
GitHub issues/PRs are effectively a public forum for a software project where the maintainers play moderator, and that forum is now overrun with trolls and bots filling it with spam. Closing off that means of contributing is going to be the rational response for a lot of projects. Even more will be shunted to semi-private communities like Discord/Matrix/IRC/email lists.
If the goal is truly "clarity", then I fail to see how this leads to more readable programs than Knuth's Web (or cweb for a more practical implementation).
If you really mean "C/C++, but with the sharp pointy bits filed down," then I fail to see what it adds over MISRA.
The bottom line is that we've had "clarity-first" languages for decades; the reason they're not more widely used is not simply that no one has tried this, nor that programmers appreciate murky code.
And citizens benefit from the taxes paid by non-citizen immigrants, whether documented or undocumented. Not just income and payroll taxes that might be dodged by under-the-table arrangements, but sales taxes, property taxes (perhaps paid indirectly via rent to a taxpaying landlord), the consumer share (nearly 100%) of tariffs, etc. And much of that tax base is spent on benefits and services that are not accessible to taxpaying non-citizens.
So from that standpoint, immigrants are a /better/ economic deal for the public than children are. At the end of the day, though, it shouldn't matter where people were born if they're contributing to society, and the grandparent post is 100% correct that the whole debate is stupid.
Oh, in that case no W-2 employee pays income taxes; their employer does. I guess we’re all just mooches on society and only the company owners do anything.
No, they just pay sales tax and other taxes on use. I was being sarcastic because you are fundamentally incorrect and as the other comment said, engaging in sophistry.
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
That's not a great example, since memcpy() already has all the information it needs to determine whether the regions overlap (src < dst + len && dst < src + len) and even where and by how much. So pretty much any quality implementation is already performing this test and selecting an appropriate code path, at the cost of a single branch rather than 8x as many memory operations.
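A sketch of that overlap test, using a hypothetical `copy_auto()` wrapper of my own invention (this is not how any particular libc is written):

```c
#include <stddef.h>
#include <string.h>

/* Regions [src, src+len) and [dst, dst+len) overlap iff each one
 * starts before the other ends. One branch selects the safe or the
 * fast path. Note: comparing pointers into unrelated objects is
 * technically undefined in ISO C; a libc implementation can rely on
 * its platform's flat address space, but portable code should not. */
void *copy_auto(void *dst, const void *src, size_t len)
{
    const unsigned char *s = src;
    unsigned char *d = dst;

    if (s < d + len && d < s + len)
        return memmove(dst, src, len);  /* overlapping: safe path */
    return memcpy(dst, src, len);       /* disjoint: fast path */
}
```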
The real purpose of restrict is to allow the compiler to cache a value that may be used many times in a loop (in memcpy(), each byte/word is used only once) in a register at the start of the loop, and not have to worry about repeatedly reaching back to memory to re-retrieve it because it might have been modified as a side effect of the loop body.
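A minimal sketch of that use of restrict (function and parameter names are mine): with both dst and k qualified restrict, the compiler knows the stores through dst cannot modify *k, so it may load *k into a register once instead of re-reading it from memory on every iteration.

```c
/* Without restrict, the compiler must assume dst[i] might alias *k
 * and reload *k each time through the loop; with restrict, *k can be
 * hoisted out of the loop and kept in a register. */
void scale(float *restrict dst, const float *restrict src,
           const float *restrict k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * *k;  /* *k loaded once, not n times */
}
```

The aliasing promise is the programmer's to keep: calling this with k pointing into dst is undefined behavior, which is exactly the freedom the optimizer is being granted.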
>> was it the unions or executives that decided to offshore manufacturing?
>Neither. It was consumers, who prefer lower prices.
Right, because every executive who pursued offshore manufacturing was thinking, "gosh, how can I deliver even lower prices and better value to my customers?", and not "OK, we've shown the market will pay $X for product Y, how can I cut my costs and free up more money for bonuses and cocaine?"
Graphs of price indices (aside from a few sectors such as electronics where it was the core technology that improved, not labor efficiency) and wages over the last 50 years clearly show that the bulk of any offshoring savings were not passed along to consumers or front-line workers.
> Right, because every executive who pursued offshore manufacturing was thinking, "gosh, how can I deliver even lower prices and better value to my customers?", and not "OK, we've shown the market will pay $X for product Y, how can I cut my costs and free up more money for bonuses and cocaine?"
Suppose it used to cost you $1 to make something in the US that you had been selling for $1.50. The cost of domestic real estate and other things goes up, so now to make it in the US it costs you $1.60. If you sell for $1.50 you're losing money and to have your previous gross margin percentage you'd have to sell for $2.40. Meanwhile it still costs $1 to make it in China and one of your competitors is doing that and still selling it for $1.50.
Your options are a) don't raise prices, lose $0.10/unit and go out of business, b) raise prices, lose all of your sales to the company who still charges the old price and go out of business, or c) offshore manufacturing.
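The $2.40 figure follows from holding the gross-margin percentage constant; a quick check of that arithmetic (the helper function is mine, just for illustration):

```c
/* Gross margin = (price - cost) / price, so holding the margin fixed,
 * the required price is cost / (1 - margin). At a $1.00 cost and
 * $1.50 price the margin is one third; at a $1.60 cost the same
 * margin requires a $2.40 price. */
double price_for_margin(double cost, double margin)
{
    return cost / (1.0 - margin);
}
```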
The only way out of this is to get the domestic costs back down, which is a matter of changing the government policy over zoning rules etc.
> Graphs of price indices (aside from a few sectors such as electronics where it was the core technology that improved, not labor efficiency) and wages over the last 50 years clearly show that the bulk of any offshoring savings were not passed along to consumers or front-line workers.
Are these graphs being adjusted for the increasing cost of things like domestic real estate having to be incorporated into the prices of everything? Even if you make it in another country you still have to pay for a domestic warehouse or retail store.