Hacker News

Joran from TigerBeetle!

Without much sweat for general purpose workloads.

But transaction processing tends to have power law contention that kills SQL row locks (cf. Amdahl’s Law).

We put a contention calculator on our homepage to show the theoretical best case limits and they’re lower than one might think: https://tigerbeetle.com/#general-purpose-databases-have-an-o...
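As a back-of-envelope sketch of the limit being discussed (this is not the calculator's actual model, and the lock-hold times below are illustrative assumptions): if every transaction must hold a lock on the same hot row while a network round trip completes, the lock serializes all writers, so throughput on that row is capped at one transaction per lock-hold interval.

```python
def max_tps(lock_hold_seconds: float) -> float:
    """Upper bound on throughput when all writes serialize on one lock.

    With a fully serialized critical section, adding cores or replicas
    cannot help: at most one transaction completes per hold interval.
    """
    return 1.0 / lock_hold_seconds

# Assumed hold times: a lock held across a network round trip is often
# on the order of 1-10 ms, which yields the 100-1000 TPS figures.
for hold_ms in (1.0, 5.0, 10.0):
    print(f"{hold_ms:>4.1f} ms hold -> at most {max_tps(hold_ms / 1000):,.0f} TPS")
```

The point of contention in the thread is whether that single-hot-row bound says anything about whole-database throughput across many independent rows.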



>Traditional SQL databases hold locks across the network; under Amdahl's Law, even modest contention caps write throughput at ≈100–1,000 TPS

In fact, large real-world systems are not limited to 100-1000 TPS, or even 10 kTPS as the calculator tries to suggest. That's not because Amdahl's law is wrong; the numbers you're plugging in are just wildly off, so the conclusions are equally nonsensical.

There might be some specific workloads where you saw those numbers, and your DB might be a good fit for that particular niche, but you shouldn't misrepresent general-purpose workloads to prop up your DB. Claiming that SQL databases are limited to "100-1000 TPS" is unserious, and it is not conducive to your cause.


They’re talking about 100-1000 TPS for transactions that all lock a single row, which is not wrong, just not reflective of most workloads. They’re not talking about the TPS of the entire database with simultaneous operations on many independent rows. This should be reasonably clear in context, but when you publish claims that look grandiose in isolation, backed by very vague graphs, people won’t be happy.

TFA contextualizes this better:

> This gets even worse when you consider the problem of hot rows, where many transactions often need to touch the same set of “house accounts”.


This whole page, and their response in this thread, is about tigerbeetle as a transaction processing database - e.g. financial transaction processing

I think this is very clear, I don't know why you're saying that tigerbeetle is trying to make a generic claim about general workloads

The comment you're replying to explicitly states that this isn't true for general workloads


It's a financial database built for use cases where this invariant holds, and for enabling new use cases where the invariant previously kept businesses from expanding into new industries. The creator says as much:

> Without much sweat for general purpose workloads.


Having a niche and expanding into new industries are all fine, there is no problem with having a DB filling a particular sub-segment of the market.

But writing that traditional SQL databases cannot go above these "100-1000 TPS" numbers due to Amdahl's law is going to raise some eyebrows.


He clearly says this is in the context of "transaction processing" in the comment you're responding to.


> But writing that traditional SQL databases cannot go above these "100-1000 TPS" numbers due to Amdahl's law is going to raise some eyebrows.

I don't think that's controversial. Amdahl's law applies to all software; it's not a peculiar feature of SQL databases. The comment is well-contextualized, in my view, but reasonable minds may disagree.
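For reference, a minimal sketch of Amdahl's law itself (the parallel fractions below are illustrative, not figures from the thread): speedup on n processors is bounded by the fraction of work that must run serially, such as a contended lock.

```python
def amdahl_speedup(parallel_fraction: float, n: int) -> float:
    """Amdahl's law: maximum speedup on n processors when only
    `parallel_fraction` of the work can be parallelized."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Even a small serial fraction dominates at scale: with 1% of the work
# serialized, speedup can never exceed 100x no matter how many cores.
print(amdahl_speedup(0.99, 1_000_000))  # approaches, but never reaches, 100
print(amdahl_speedup(0.50, 1_000_000))  # caps just under 2x
```

This is why the argument hinges entirely on what serial fraction you assume for a "typical" workload, not on the law itself.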


You’re changing the subject to the company’s mission when the concern was about a specific claim made.


A specific claim about OLTP processing under contention. Or how would you characterize the specific claim, specifically?


So, the pitch for TigerBeetle is... "you can do database schemas wrong and we're performant enough"? (and also we don't have auth)



