I really don't want to come across as inflammatory, but I don't think we should shy away from pointing out cases of over-dependence on AI when we see them. These models can be dangerous.
The 2.7KB Zig WASM binary is the scoring engine that runs on every request at Cloudflare's edge. The globe visualizes where those requests land. Two layers — compute at the edge, visualization in the browser.
MCPaaS serves persistent AI context via the Model Context Protocol. A namepoint (mcpaas.live/yourhandle) gives your AI instant project context — no re-explaining every session. Works with Claude, Gemini, Cursor, any MCP client.
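For anyone wondering what hooking that up looks like: one common pattern for pointing an MCP client at a remote server is proxying it through the `mcp-remote` package in the client's config file (Claude Desktop uses an `mcpServers` map in `claude_desktop_config.json`). This is an illustrative sketch — exact keys and the right transport vary by client and version, and the URL here just mirrors the namepoint pattern from the post:

```json
{
  "mcpServers": {
    "mcpaas": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcpaas.live/yourhandle"]
    }
  }
}
```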
Exactly right. 2.7KB works because it's pure computation — slot counting, no allocator, no stdlib, no WASI. The moment you need I/O it balloons. This use case fits it like a glove.
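To make that concrete, here's a hypothetical sketch of what a pure-computation export looks like in Zig — not the actual scoring engine, just the shape of it. Nothing below pulls in an allocator, std at runtime, or WASI, which is exactly why binaries like this stay in the low-KB range:

```zig
// Illustrative only: a pure slot-counting score function.
// The function name and formula are assumptions, not the real engine.
export fn score(slots_used: u32, slots_total: u32) u32 {
    // Pure arithmetic — no heap, no syscalls, so the
    // freestanding WASM target emits almost no overhead.
    if (slots_total == 0) return 0;
    return (slots_used * 100) / slots_total;
}
```

Built for the freestanding WASM target (something like `zig build-lib score.zig -target wasm32-freestanding -dynamic -O ReleaseSmall`; exact flags depend on your Zig version), the module exports `score` and nothing else.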
Fair point — globe.gl (Three.js) handles the 3D rendering client-side.
The 2.7KB WASM is the server-side scoring engine — Zig-compiled, runs on every
request at the Cloudflare edge. The globe visualizes where those executions happen.
Two separate layers: WASM at the edge, JS in the browser.
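On the browser layer, the glue is mostly just shaping request locations into the point objects globe.gl consumes. A hedged sketch — the input field names and aggregation are assumptions, not the project's actual schema:

```javascript
// Hypothetical: collapse edge-request records into weighted points
// in the { lat, lng, ... } shape that globe.gl's pointsData() expects.
function toGlobePoints(requests) {
  const byLocation = new Map();
  for (const r of requests) {
    const key = `${r.lat},${r.lng}`;
    const entry = byLocation.get(key) || { lat: r.lat, lng: r.lng, weight: 0 };
    entry.weight += 1; // one request = one unit of weight at that location
    byLocation.set(key, entry);
  }
  return [...byLocation.values()];
}

// Client-side usage would then be roughly:
//   Globe()(document.getElementById('globe')).pointsData(toGlobePoints(reqs));
```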