
RTOS can be used a lot more loosely than you describe: think of it as a build system, scheduler, and interrupt framework that lets you program an MCU the way you describe. Zephyr RTOS and FreeRTOS provide easy enough ways to write code that uses blocking APIs but probably runs according to your timing constraints if you hold it right. As an alternative, you could write for “bare metal” and handle the control flow, scheduling, interrupts, etc. yourself. If you are writing to “random” addresses according to some datasheet to effect some real-world change, you are probably reaching for an RTOS or bare metal unless you are writing OS drivers. If you look at the Linux drivers, you will see a lot of similarities to the Zephyr RTOS drivers, but one of them is probably clocked in the MHz while the other is in the GHz.
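
For a rough picture of what the bare-metal path looks like, here is a minimal sketch in Rust; the register address and bit position are invented for illustration, not taken from any real datasheet:

    // Minimal bare-metal-style sketch: poke a memory-mapped GPIO output register.
    // The address and bit are hypothetical; a real datasheet gives the actual
    // peripheral base address and bit layout.
    fn set_led_high() {
        let gpio_out = 0x4002_0014usize as *mut u32; // hypothetical register address
        // Volatile write so the compiler cannot elide or reorder the store.
        unsafe {
            core::ptr::write_volatile(gpio_out, 1 << 5);
        }
    }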

I kinda quit using it. The tab feature is useful when making minor or mundane changes, but I quite prefer the codex GUI if I am going to be relatively hands off with agents.

Is there a new version or news related to this? v0.9 was Nov 2024, and Leptos and Dioxus have been a lot more active.

There have been a few minor releases since. I am planning to make a new release soon with a few bug fixes, and I'm working on new major features.

I'm also looking for new contributors and maintainers!


AI-assisted research is a solid A already, if you are doing greenfield work. The horizon is only blocked by tooling that requires a GUI. Even then, that is a small enough obstruction for most researchers.


Yes! I’m not sure how many people arguing for one or the other have tried both, but it is clear that you know the pain.


Have you been in a self driving car? There are some quite annoying hiccups, but they are already very safe. I would say safer than the average driver. Defensive driving is the norm. I can think of many times where the car has avoided other dangerous drivers or oblivious pedestrians before I realized why it was taking action.


That timestamp resolution discrepancy is going to cause so many problems


Do you mean the new default datetime resolution of microseconds instead of the previous nanosecond resolution? Obviously this will require adjustments to any code that requires ns resolution, but I'd bet that's a tiny minority of all pandas code ever written. Do you have a particular use case in mind for the problems this will cause?


I would describe it as the huge majority, reflecting on my pandas use over the years. Pretty much all of the data worth exploring in pandas over Excel, some data GUI, or Polars involves timestamps.


Yeah, but is nanosecond-level resolution necessary? In many scenarios, a resolution of one second is adequate.


I don't need nanosecond accuracy. I just know there are a lot of scripts expecting it.


Heard of `#![forbid(unsafe_code)]`?


Is that effectively enforcement of features that are banned?


Rust's compiler has six of what it calls "lint levels", two of which can't be overridden by compiler flags. These handle all of its diagnostics and are also commonly used by linters like Clippy.

Allow and Expect are levels where it's OK that a diagnostic happened, but with Expect, if the expected diagnostic was not produced, we get another diagnostic telling us that our expectation wasn't fulfilled.

Warn and Force-Warn are levels where we get a warning but compilation results in an executable anyway. Force-warn is a level where you can't tell the compiler not to emit this warning.

Deny and Forbid are levels where the diagnostic is reported and compilation fails, so we do not get an executable. Forbid, like Force-warn, cannot be overridden with compiler flags.
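
In source form the levels look roughly like this (the attribute names are real; the particular lints are only example choices, and Force-warn has no attribute because it is a compiler flag, --force-warn):

    // Crate-level lint settings; the lints chosen here are illustrative.
    #![forbid(unsafe_code)]      // Forbid: hard error, cannot be relaxed anywhere below
    #![deny(unused_must_use)]    // Deny: error, but an inner allow could still override it
    #![warn(unused_variables)]   // Warn: diagnostic only, the build still succeeds

    #[allow(dead_code)]          // Allow: silence the lint for this item
    fn unused_helper() {}

    #[expect(unused_mut)]        // Expect (Rust 1.81+): complain if the lint does NOT fire
    fn demo() {
        let mut x = 0;           // `mut` is unneeded here, so the expectation is met
        let _ = x;
    }

    fn main() {
        demo();
    }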


#![deny(clippy::unwrap_used)]


I doubt unsafe would be blatantly banned. I was thinking more of things like glob imports.


SIMD was one I thought we needed. Then I started benchmarking using iter with chunks and a nested if statement to check the chunk size. If it was necessary to do more, it was typically time to drop down to asm rather than worry about another layer between the code and the machine.


This is the most surprising comment to me. It’s that bad? I haven’t benchmarked it myself.

Zig has @Vector. This is a builtin, so it gets resolved at comptime. Is the problem with Rust here too much abstraction?


I think you misinterpreted GP; he's saying that with some hints (explicit chunking with a branch on the chunk size), the compiler's auto-vectorization can handle the rest, inferring SIMD instructions in a manner that's 'good enough'.
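
A sketch of that pattern, here using chunks_exact instead of a manual length check; the function and element type are invented for illustration:

    // chunks_exact gives the optimizer fixed-length inner loops, which it can
    // usually turn into SIMD adds; whatever is left over runs as a scalar tail.
    fn sum_i32(data: &[i32]) -> i32 {
        let mut total = 0;
        let mut chunks = data.chunks_exact(8);
        for chunk in &mut chunks {
            // Known trip count of 8: a good auto-vectorization candidate.
            for &x in chunk {
                total += x;
            }
        }
        // Anything that did not fill a full chunk is handled as plain scalar code.
        for &x in chunks.remainder() {
            total += x;
        }
        total
    }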


The reason for not choosing Rust still doesn't make any sense to me. If you don’t want to OOM, need correctness, and are following the Power of Ten (where you aren’t allocating anyway), I don’t see the conflict or harm in additional enforced correctness.

Also, Rust does support checked arithmetic and has stable toolchains.
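
For reference, a tiny example of what checked arithmetic looks like in std (the values are arbitrary):

    fn main() {
        let a: u32 = u32::MAX;
        // checked_add returns None on overflow instead of wrapping or panicking.
        assert_eq!(a.checked_add(1), None);
        // Saturating and wrapping variants are also in std.
        assert_eq!(a.saturating_add(1), u32::MAX);
        assert_eq!(a.wrapping_add(1), 0);
    }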


Several types of correctness are more difficult to enforce in Rust than in some other languages due to its relatively weak compile-time facilities. Modern database engines don't allocate memory at runtime, aren't multithreaded, explicitly schedule ownership, etc. They also use the types of data structures and patterns that give the borrow checker fits. Rust's correctness capabilities are relatively less valuable in this context.

Extensive compile-time capabilities that allow you to verify many other types of correctness add enormous value when building database engines. This is a weak part of Rust's story.


Can you elaborate a little more on what structures and patterns those are? I have built some database-like things in Rust (lots of zero-copy and slice shenanigans, and heavily multi-threaded), and while it was tricky in some spots, I haven't run into anything I have had to substantially compromise on.

Your use cases are likely more complex, so I'm curious what I should be looking out for.


My understanding from reading other blogs on TigerBeetle (and the Power of Ten rule) is that it's not that they aren't allocating at all; it's all static allocation up front. Zig makes this far easier to manage with its use of Allocators. Rust wants everything to be RAII, with tons of little allocations whose lifetimes are managed by the borrow checker. You can use other patterns in Rust, of course, but you're fighting the borrow checker.

Zig gives you a lot of tools to enforce correctness in simple and straightforward ways, whereas Rust comes with a lot of complexity. TigerBeetle isn't the first project to talk about this; Richard Feldman also points out similar advantages of Zig over Rust as the reasoning for the Roc compiler's rewrite from Rust to Zig.
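
For concreteness, a rough sketch of the up-front allocation style in Rust; the pool shape, sizes, and names are made up for illustration:

    // One big allocation at startup, then indices instead of references,
    // which sidesteps most of the lifetime entanglement mentioned above.
    struct Pool {
        buffers: Vec<[u8; 4096]>,
        free: Vec<usize>,
    }

    impl Pool {
        fn new(count: usize) -> Self {
            Pool {
                buffers: vec![[0u8; 4096]; count],
                free: (0..count).collect(),
            }
        }

        // Hand out an index rather than a borrow.
        fn acquire(&mut self) -> Option<usize> {
            self.free.pop()
        }

        fn release(&mut self, idx: usize) {
            self.free.push(idx);
        }

        fn buffer_mut(&mut self, idx: usize) -> &mut [u8; 4096] {
            &mut self.buffers[idx]
        }
    }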


There's a change in the tradeoffs in the above scenario:

- you still get extra benefit from Rust, but the magnitude of the benefit is reduced (e.g., no UAF without F).

- you still get extra drawbacks from Rust, but the magnitude of drawbacks is increased, as Rust generally punishes you for not allocating (boxing is a common escape hatch to avoid complex lifetimes).

Just how much the tradeoff is shifted is hard to quantify unambiguously, but, from my PoV (Rust since 2015, TB/Zig since late 2022), Zig was and is the right choice in this context.


I mainly use Rust in embedded now. I don’t always rely on encoding all of the correctness in the Rust type system. To a degree, all the old ways of enforcing correctness are still in play; I am just choosing when to use idiomatic Rust and when to escape-hatch out via a shim to C-style Rust. It reminds me quite a bit of how C and C++ shops require another layer of macros or templates to be used for containers, resources, etc.

The build time of Zig seems like the most desirable piece worth deciding over. Developer time is money, but it isn’t weird to have multi-hour build times in a mature C, C++, or Rust project either. The correctness suite is a bigger time sink than the build, though. When building a database, you could drive the build time to 0 and still have hours in CI.


Out of interest, did you read the two posts [0][1] linked in there by matklad, creator of rust-analyzer as well as IntelliJ Rust, on our team?

Suffice to say, we know the intrusive memory and comptime patterns we use in our code base, and they wouldn't be as natural to express in a language other than Zig.

Rust is a great language, but Zig made more sense for what I wanted to create in TigerBeetle.

[0] https://matklad.github.io/2023/03/26/zig-and-rust.html

[1] https://lobste.rs/s/uhtjdz/rust_vs_zig_reality_somewhat_frie...


Yeah, I think the BDFL wants to use Zig. I understand that it is nice for Zig to feel more like C, and that can be fun. But if the toolchain is so far away from being mature, how long will it take the database to be mature?

Since the previous comment was edited, I would clarify that I don’t doubt the engineering capabilities, just the timeline. A from-scratch database in _established_ toolchains takes 5-10 years. The Zig toolchain is also going to be evolving in the same timeframe or longer: the codegen, linking, architecture-specific bugs, etc. Isn’t it double the effort to bring to bear on the market?


You're right! In the past, distributed databases written without Deterministic Simulation Testing (DST) would typically take ~10 years (~5 years for the consensus protocol and ~5 years for the storage engine), and then not without bugs.

However, and we also write about this in the post, TigerStyle and DST enabled us to ship TigerBeetle to production in 3.5 years: in less time, to a higher standard, and as the first distributed database with an explicit storage fault model (Kyle Kingsbury added new storage fault injectors to Jepsen), solving Protocol-Aware Recovery.

Our Jepsen audit is here (also linked in the post): https://jepsen.io/analyses/tigerbeetle-0.16.11

For more on TigerStyle as a methodology, hope you enjoy this talk: https://www.youtube.com/watch?v=w3WYdYyjek4


I was on a team with a similar timeline with C++ (4 years). All the language and toolchain difficulties came after shipping. Meeting new customer needs meant shifting from greenfield to brownfield engineering. We were chasing down strange platform and provider behaviors. Adding features while maintaining performance and correctness meant relying on knowledge of tools available in the broader community. Solutions for build issues came through a combination of in-house effort and industry partners with related experience. Having two stable compilers (GCC and Clang) was super helpful.

