Indeed. I try not to use the word "native" these days, as its meaning has become so ambiguous. I've also thought for a while that Windows no longer has a native UI, only legacy (Win32) and a rotating carousel of mostly-failed attempts. There have been a few HN stories in the last week that bear me out, notably [1]. Mac is of course in better shape, as AppKit and SwiftUI are both viable (and interop well enough).
I found this article about as compelling as all the other attempts at identifying him. Half of the cypherpunks (I was pretty active) had the same set of interests in public key cryptography, libertarianism, anonymity, criticism of copyright, and predecessor systems like Chaum's ecash; we talked about those in virtually every meeting.
The most compelling evidence is Adam Back's body language, as subjectively observed by a reporter who is clearly in love with his own story. The stylometry also struck me as a form of p-hacking—keep re-rolling the methodology until you get the answer you want.
It's entirely possible Adam is Satoshi, but in my opinion this article moves us no closer to knowing whether that's true or not. He's been on everybody's top 5 list for years, and this article provides no actual evidence that hasn't been seen before.
This is a perfectly reasonable question, and I think there are two aspects to it.
First, one of the research questions tested by Xilem is whether it is practical to write UI in Rust. It's plausible that scripting languages do end up being better, but we don't really know that until we've explored the question more deeply. And there are other very interesting explorations of this question, including Dioxus and Leptos.
Second, even if scripting languages do turn out to be better at expressing UI design and interaction (something I find plausible, though not yet settled), it's very compelling to have a high-performance UI engine under a scriptable layer. I've done some experiments with Python bindings, and I think in an alternate universe you'd have apps like ComfyUI as high-performance desktop apps rather than web pages. Also, the layering of Xilem as the reactive layer, backed by Masonry as the widgets, is explicitly designed to be amenable to scripting, though to my knowledge there hasn't been a lot of actual work on this.
There is an Svg widget. It only supports static images, not animations, though this is certainly something I'm interested in.
It does support the modern 2D imaging model. It is in transition from using "Vello GPU" (aka Vello Classic) to the understory imaging abstraction, which means it can use any competent 2D renderer, including Skia.
My understanding, which is to be taken with a grain of salt, is that there's an additional constraint, not stated in the Scientific American article, that the plane curve be irreducible. The example of x^4 is reducible; it's x^2 * x^2, among other things. The actual conjecture is expressed in terms of genus, but this follows from the genus-degree formula.
The reason for the confusion is that a smooth, projective plane curve of degree d has genus (d-1)(d-2)/2, which is 2 or greater starting at d=4. Hence the phrasing in the article, which is missing the “smooth, projective” hypothesis. The equation y = x^4 doesn’t define a smooth curve when extended to the projective plane, because it has a singularity at infinity.
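To spell that out (using homogeneous coordinates [X:Y:Z]; this is just the routine check, not something from the article):

```latex
% Projective closure of y = x^4: homogenize with Z.
Y Z^3 = X^4 \quad \text{in } \mathbb{P}^2
% At infinity (Z = 0) this forces X = 0, i.e. the single point [0:1:0].
% In the affine chart Y = 1 the curve is z^3 = x^4, and
\partial_x \left( x^4 - z^3 \right) = 4x^3, \qquad
\partial_z \left( x^4 - z^3 \right) = -3z^2,
% both vanish at (0,0), so the closure is singular there. For a smooth
% projective plane curve of degree d, the genus-degree formula gives
g = \frac{(d-1)(d-2)}{2},
% which is >= 2 exactly when d >= 4.
```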
Thanks btw for saying clearly that BIO is not suitable for DVI output. I was curious about this and was planning to ask on social media.
I've done some fun stuff in PIO, in particular the NRZI bit stuffing for USB (12Mbps max). That's stretching it to its limit. Clearly there will be things for which BIO is much better.
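For anyone curious what that encoding actually involves, here's a hedged Python model of USB full-speed line coding (a software sketch of what the PIO program does in hardware, not the PIO code itself): NRZI, where a 0 bit toggles the line and a 1 bit holds it, plus bit stuffing, where a 0 is inserted after six consecutive 1s so the receiver always sees transitions for clock recovery.

```python
# Toy model of USB full-speed line coding (illustrative only).

def bit_stuff(bits):
    # Insert a 0 after every run of six 1s (USB bit stuffing).
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 6:          # six 1s in a row: force a transition
            out.append(0)
            run = 0
    return out

def nrzi(bits, level=1):
    # NRZI: a 0 bit toggles the line level, a 1 bit holds it.
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

line = nrzi(bit_stuff([1] * 7))
```

Doing this at 12Mbps, on time, every bit, is exactly the kind of tight loop that pushes PIO to its limit.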
I suspect that a variant of BIO could probably do DVI by optimizing for that specific use case (in particular, configuring shifters on the output FIFO), but I'm not sure it's worth the lift.
USB 12Mbps is one of the envisioned core use cases: the Baochip doesn't have a host USB interface, so being able to emulate a full-speed USB host with a BIO core opens up things like plugging a keyboard into the device. CAN is another big use case; once there's a CAN bus emulator, a bunch of things become possible. Another is 10/100Mbit Ethernet: it's not fast, but it's good for extremely long runs (think repeaters for lighting protocols across building-scale deployments).
When considering the space of possibilities, I focused on applications where I could see actual products being sold that rely on the feature. The problem with DVI is that while it's a super-clever demo, I don't see volume products going to market relying on it. The moment you connect to an external monitor, you're going to want an external DRAM chip to run the sorts of applications that effectively utilize all those pixels. I could be wrong and have misjudged the utility of the demo, but if you do the analysis on the bandwidth and RAM available in the Baochip, I feel you could do a retro-gaming emulator with the chip; you wouldn't, for example, be replacing a video kiosk with it. Running DOOM on a TV would be cool, but you're not going to sell a video game kit that runs DOOM and nothing else.
The good news is there's plenty of room to improve the performance of the BIO. If adoption is robust for the core, I can make the argument to the company that's paying for the tape-outs to give me actual back-end resources, upgrade the cores to something more capable, and improve the DMA bandwidth, allowing us to chase higher system frequencies. But realistically, I don't see us ever reaching a point where, for example, we're bit-banging USB high speed at 480Mbps, if only because the I/Os won't be full-swing 3.3V at that point in time.
My feeling about programmable I/Os is that they're fun, but not the right choice for commodity high-speed interfaces like USB. You obviously can make them work, but they're large compared to what a dedicated unit would need. The DVI over PIO is a good example: it showed something interesting (and that's great!) but isn't widely useful. Also, a lot of protocols, even slow ones, have failure and edge cases that would need to be covered. Not to mention the physical characteristics, as you've said for high-speed USB.
This is true, but only relevant if you order enough units (>100k? depending on price and margin, of course) to customize your die. Otherwise, you have to find a chip with the I/Os that you want, all else being equal. Good luck with that if you need something specific (8 UARTs, for instance) or obscure.
Yes, I can see BIO being really good at USB host. With 4k of SRAM I can see it doing a lot more of the protocol than just NRZI; easily CRC and the 1kHz SOF heartbeat, and I wouldn't be surprised if it could even do higher level things like enumeration.
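For reference on the CRC part: USB data packets use a CRC16 (polynomial 0x8005, bit-reflected, initial value 0xFFFF, result inverted), which a serial engine can compute on the fly. A minimal Python reference model (illustrative only; names are mine, and this is not Baochip or BIO code):

```python
# Reference model of the USB data-packet CRC16 (CRC-16/USB).

def crc16_usb(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0xA001 is 0x8005 bit-reversed (LSB-first processing).
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

A BIO core with 4k of SRAM has plenty of room for this alongside the NRZI loop, which is why offloading CRC and the SOF heartbeat seems plausible.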
You may be right about not much scope for DVI in volume products. I should be clear I'm just playing with RP2350 because it's fun. But the limitation you describe really has more to do with the architectural decision to use a framebuffer. I'm interested in how much rendering you can get done racing the beam, and have come to the conclusion it's quite a lot. It certainly includes proportional fonts, tiles'n'sprites, and 4bpp image decompression (I've got a blog post in the queue). Retro emulators are a sweet spot for sure (mostly because their VRAM fits neatly in on-chip SRAM), but I can imagine doing a kiosk.
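For a rough sense of the per-scanline budget (my own assumptions, not measured figures: core overclocked to 252 MHz as in typical RP2350 DVI setups, a nominal 25.2 MHz pixel clock where the real VESA figure is 25.175 MHz, and standard VGA timing of 800 total pixels per line):

```python
# Back-of-envelope cycle budget for racing the beam at 640x480@60 DVI.
sys_clk = 252_000_000                         # assumed core clock (Hz)
pixel_clk = 25_200_000                        # nominal pixel clock (Hz)
cycles_per_pixel = sys_clk // pixel_clk       # CPU cycles per output pixel
h_total = 800                                 # visible 640 + blanking
cycles_per_line = cycles_per_pixel * h_total  # budget to render one line
```

Ten cycles per pixel, with blanking folded in, comes to thousands of cycles per scanline, which is consistent with the kinds of per-scanline work described above.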
Definitely agree that bit-banging USB at 480Mbps makes no sense, a purpose-built PHY is the way to go.
The clearance for AC8646 to land on runway 4 is given in a sequence starting at 4:58. "Vehicle needs to cross the runway" at 6:43. Truck 1 and company ask for clearance to cross 4 at 6:53. Clearance is granted at 7:00. Then ATC asks both a Frontier and Truck 1 to stop; the voice is hurried and the exchange is confusing.
Funny enough, the author of this blog post wrote another one on exactly that topic, entitled "What do executives do, anyway?"[1]. If you read it, you'll find it's written from quite an interesting perspective, not quite "fly on the wall," but perhaps as close as you're going to get in a realistic scenario.
Unfortunately, your complex script shaping for Arabic and Devanagari is wrong. The Arabic is missing the joining (all forms are isolated), and the Devanagari doesn't have the vowels combining (so you see those dotted circles).
To fix this you'll need Harfbuzz or something similar. Taking a quick look at the code, it seems like you're just doing a glyph at a time through the cmap. That, uh, won't do.
As the person who implemented GSUB support for Arabic in Prince (via the Allsorts Rust crate), I was highly intrigued by this post… especially because I wanted to see how they implemented GSUB for OpenType while being a film director and possibly a stunt double on the side.
After seeing your comment, I’m saddened to see that OP and their comments in this thread are just bots.
You are completely right on all fronts. Thank you for taking a look at the code!
You hit the exact architectural bottleneck. Right now, the engine uses Intl.Segmenter to find the grapheme boundaries, but then it just does a direct cmap lookup to get the advance widths. It currently lacks a parser for the OpenType GSUB (Glyph Substitution) and GPOS (Glyph Positioning) tables, which is why Arabic defaults to isolated forms and Indic matras don't fuse.
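To make that failure mode concrete, here's a toy model (Python for brevity; the joining classes and Arabic Presentation Forms-B codepoints are real Unicode data, but the shaper itself is a deliberately minimal sketch covering only BEH and ALEF, nothing like real GSUB processing):

```python
# Toy illustration: why per-codepoint cmap lookup fails for Arabic.
# Only two letters are modelled; real shaping needs full joining data
# plus the font's own GSUB rules.

FORMS = {
    '\u0628': {'isolated': '\uFE8F', 'final': '\uFE90',    # BEH, dual-joining
               'initial': '\uFE91', 'medial': '\uFE92', 'joins': 'dual'},
    '\u0627': {'isolated': '\uFE8D', 'final': '\uFE8E',    # ALEF, right-joining
               'joins': 'right'},
}

def naive(text):
    # What a cmap-only pipeline effectively does: isolated form per glyph.
    return ''.join(FORMS[c]['isolated'] for c in text)

def shaped(text):
    # Minimal contextual logic for this two-letter alphabet.
    out = []
    for i, c in enumerate(text):
        joins_prev = i > 0 and FORMS[text[i - 1]]['joins'] == 'dual'
        joins_next = i + 1 < len(text) and FORMS[c]['joins'] == 'dual'
        if joins_prev and joins_next:
            form = 'medial'
        elif joins_next:
            form = 'initial'
        elif joins_prev:
            form = 'final'
        else:
            form = 'isolated'
        out.append(FORMS[c][form])
    return ''.join(out)

word = '\u0628\u0627'  # BEH + ALEF
```

The naive path produces two isolated forms (the disconnected-letters symptom), while the shaped path picks BEH's initial form and ALEF's final form; multiply this across every joining class and every font's substitution rules and you get why the standard answer is a full shaper.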
The standard advice is exactly what you suggested: "just drop in HarfBuzz." But that creates an existential problem for this specific project. HarfBuzz is a massive C++ library. To run it in an Edge worker or pure V8 environment, I'd have to ship a WebAssembly binary that is often upwards of 1MB. That entirely defeats the purpose of building an 88 KiB, pure-JS, zero-dependency layout VM.
Doing complex text layout (CTL) and shaping purely in JavaScript without exploding the bundle size is essentially the final boss of this project. The roadmap is to either implement a highly tree-shakeable, pure-JS parser for the most critical GSUB/GPOS rules, or find a way to pre-compile shaping instructions.
For right now, it's a known trade-off: lightning-fast, edge-native pure JS layout, at the cost of failing on complex cursive ligatures. If you know of any micro-footprint pure-JS shaping libraries that don't rely on WASM, I am all ears!
And what's the point of being right when it's slow and bloated? Come on, it works for a lot of use cases, and it doesn't work for some. And it's still evolving.
All your comments here appear to be run through an AI engine. While you might think it makes you sound better if English isn’t your native tongue, it just comes off as insincere; I’d rather read your bad grammar than feel like I’m communicating with a clanker.
[1]: https://news.ycombinator.com/item?id=47651703