Crashes happen for reasons besides memory safety. Web engines are crazy complicated pieces of software, and crashes can happen for any number of reasons. Also, I'd be shocked if this was written using purely safe Rust.
Actually, there are no inserts in this example: each transaction is 2 updates within a logical transaction that can be rolled back (a savepoint). So in raw terms you are talking 200k updates per second and 600k reads per second (as there's a 75%/25% read/write mix in that example). Also worth keeping in mind that updates are slower than inserts.
> no indexes.
The tables have an index on the primary key, with a billion rows. More indexes would add write amplification, which would affect both databases negatively (likely PG more).
> Also, I didn't get why sqlite was allowed to do batching and pgsql was not.
Interactive transactions [1] are very hard to batch over a network. To get the same effect you'd have to limit PG to a single connection (defeating the point of MVCC).
- [1] An interactive transaction is a transaction that intermingles database queries with application logic running in the application.
Not the person you are responding to, but SQLite is effectively single-writer (even across multiple processes, you get one write transaction at a time).
So, if you have a network server that does BEGIN TRANSACTION (process 1000 requests) COMMIT (send 1000 acks to clients), then with SQLite your rollback rate from conflicts will be zero.
For PG with multiple clients, it’ll tend to 100% rollbacks if the transactions can conflict at all.
You could configure PG to only allow one network connection at a time, and get a similar effect, but then you’re paying for MVCC, and a bunch of other stuff that you don’t need.
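The single-writer batching pattern described above can be sketched with Python's stdlib sqlite3 (table and column names are made up for illustration): one connection processes a whole batch of "requests" inside a single transaction, so there is one fsync at COMMIT and zero conflict rollbacks.

```python
import sqlite3

# isolation_level=None gives us manual transaction control
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, n INTEGER)")
conn.execute("INSERT INTO counters VALUES (1, 0)")

# single writer: batch 1000 "requests" under one transaction
conn.execute("BEGIN")
for _ in range(1000):
    conn.execute("UPDATE counters SET n = n + 1 WHERE id = 1")
conn.execute("COMMIT")  # one fsync for the whole batch
# ...now send 1000 acks to clients...

print(conn.execute("SELECT n FROM counters").fetchone()[0])  # 1000
```

With a file-backed database the COMMIT is where the durability cost is paid once, instead of once per request.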
SQLite supports nested transactions with SAVEPOINT, so each client can have its own logical transaction that can be rolled back. The outer transaction effectively just batches the fsync, so an individual client failing a transaction doesn't cause the batch to fail (a crash would, though). And because it's a single writer, there are no rollbacks/retries from contention/MVCC.
You could try to imitate this in PostgreSQL, but the outer transaction does not eliminate the network hops for each inner/client transaction, so you gain nothing by doing it, and you still have the contention problem, which will cause rollbacks/retries. You could reduce your number of connections to one to eliminate contention, but then you are just playing SQLite's game.
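A minimal sketch of the savepoint pattern above, using Python's stdlib sqlite3 (the schema and "business rule" are invented for illustration): one outer transaction batches the fsync, each client gets a SAVEPOINT, and one client rolling back doesn't break the batch.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 100), (3, 100)")

conn.execute("BEGIN")  # outer transaction: one fsync for the whole batch
for client_id in (1, 2, 3):
    conn.execute(f"SAVEPOINT client_{client_id}")  # per-client logical transaction
    try:
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = ?",
                     (client_id,))
        if client_id == 2:
            raise ValueError("client 2's business rule failed")
        conn.execute(f"RELEASE SAVEPOINT client_{client_id}")
    except ValueError:
        # only this client's work is undone; the batch continues
        conn.execute(f"ROLLBACK TO SAVEPOINT client_{client_id}")
        conn.execute(f"RELEASE SAVEPOINT client_{client_id}")
conn.execute("COMMIT")

balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # [50, 100, 50]
```

Client 2's update is rolled back to its savepoint while clients 1 and 3 commit with the batch.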
An interactive transaction works like this in pseudocode:
beginTx
// query to get some data (network hop)
result = exec(query1)
// application code that needs to run in the application
safeResult = transformAndValidate(result)
// query to write the data (network hop)
exec(query2, safeResult)
endTx
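A concrete version of that pseudocode, sketched with Python's stdlib sqlite3 purely to show the shape (all names here are made up; with Postgres, each exec is a network round trip that the application logic sits between):

```python
import sqlite3

def transform_and_validate(rows):
    # application-side logic that cannot run inside the database
    return [(id_, qty * 2) for id_, qty in rows if qty > 0]

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 3), (2, -1)")

conn.execute("BEGIN")                                          # beginTx
rows = conn.execute("SELECT id, qty FROM orders").fetchall()   # hop 1 (on Postgres)
safe = transform_and_validate(rows)                            # app code between hops
for id_, qty in safe:
    conn.execute("UPDATE orders SET qty = ? WHERE id = ?",
                 (qty, id_))                                   # hop 2 (on Postgres)
conn.execute("COMMIT")                                         # endTx
```

Because the application code must see the result of hop 1 before issuing hop 2, the two round trips can't be collapsed into one batched statement.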
How would you batch this in Postgres and get any value? You can nest them all in a single transaction, but because they are interactive transactions, that doesn't reduce your number of network hops.
The only thing you can batch in postgres to avoid network hops is bulk inserts/updates.
But, the minute you have interactive transactions you cannot batch and gain anything when there is a network.
Your best bet is to not have an interactive transaction and port all of that application code to a stored procedure.
> How would you batch this in postgres and get any value? You can nest them all in a single transaction. But, because they are interactive transactions that doesn't reduce your number of network hops.
You can write it as a stored procedure in your favorite language, or connect over a Unix domain socket, where communication happens through local kernel buffers without the network involved.
In your post, I think the big performance hit for Postgres potentially comes from the focus on update-only statements: in SQLite, updates likely happen in place, while Postgres creates a separate record on disk for each updated row, or maybe there is some other internal stuff going on.
Your benchmark is very simplistic; it is hard to tell how SQLite would behave if you switched to inserts, for example, or had many writers competing for the same record, or longer transactions. The industry has built various benchmarks for this, TPC for example.
Also, if you want readers to understand your posts better, consider using less exotic language in the future. It's hard to tell what is batched there and how.
The statement was that China was giving up on open weights, they didn't say anything about licensing. Licensing on these models has always been hit or miss depending on which lab and which release.
But the context of the statement is a discussion about corps doing a grab-and-rent strategy. My understanding is that the referenced Chinese model can't be an argument in this context, and there are no recent 200B+ param Chinese models with a friendly license.
That license is more like a business source license than an open source license.
It might not be a blatant violation; more likely they just don't track this on their side because they don't think it's a big deal, so some individuals can act like that.
A blatant violation would be doing it in many cases and at large scale.
> During the entire gulf war (Iraq, 1990-91), only two F-15s were shot down via surface-to-air engagement.
Was it because the F-15 was used as an air superiority fighter at that time, whereas now they use it as a heavy bomber? I assume plenty of bombers were shot down in Iraq.
Per wiki, the F-15E was first produced in 1987, so there were very few in service at that time, and most ground strikes were carried out by other aircraft.
Yes, most ground strikes were by other aircraft types, but the F-15E did fly a lot of sorties, almost as many as the F-111 or F-4G (the F-16 flew many, many more, though not all of them were air-to-ground).
It's a common playbook for corporate self-development in NA.