d1l's comments | Hacker News

Read Moby Dick some time my friend.

The Industrial Revolution is generally understood to have started somewhere around 1760; Moby Dick takes place around 1830, about 10 years before what some historians mark as the end of the agrarian-to-industrial shift that is generally termed the Industrial Revolution.

https://en.wikipedia.org/wiki/Industrial_Revolution

I get sort of wishy-washy from 1830 on, because lots of people put the end of the Industrial Revolution at 1900, but 1840 is a defensible and commonly held position.


> The industrial revolution is generally understood to have started somewhere around 1760,

In Britain. Moby Dick ain't set in Britain.


That’s beside the point, because most whales were killed in the 20th century.

100% bro


It's a nonstop slop funnel as far as I can tell. I'm only ashamed I've been here for more than 5 minutes.


Are you using AI for the comment replies too?!


Everyone knows the em dash is a giveaway, and they are being left in.


Vibe coded trash.

That credulous HN readers will upvote this is alarming.


What's the evidence that this is vibe-coded? Or trash?


If you can't tell then I'm not sure what more needs to be said. I took a look through the commit history and it was glaringly obvious to me.

To trust something like data storage to vibe-coded nonsense is incredibly irresponsible. To promote it is even more so. I'm just surprised you can't tell, too.


The readme seems like it was written, to some degree, by Claude. If you work long enough with Claude you start to pick up on its style/patterns.


100%


I don't know about trash, but this post, this repo and even their comments on this thread are blatantly written by an AI. If you still need to ask for evidence, consider that you might be AI-blind.


This is not a vibe-coded project; it was developed by understanding the SQLite code. Have you ever looked into the examples? Have you checked the code? Now my post got flagged. And if I use AI to understand code, then what is wrong with that? What is AI for? To make a person more productive, right?


You've been copying and pasting directly from Claude to reply to comments that ask how this works. You also realise you've been caught and are now replying in a completely different style.

You've thrown away all credibility.


I believe it was flagged for spamming, not for "vibe code"


I am showing examples, a demo gif (https://github.com/hash-anu/snkv/blob/master/demo.gif), and code.


Beethoven's 17th piano sonata is based on The Tempest. I recommend Richter if you’re curious!

https://youtu.be/qeL3tAb7yV4


I’d imagine there’s an extremely long tail of features and quirks that will take time to iron out even after SQL compatibility is achieved. It looks like it’s still missing some important features like savepoints (!!!), window functions, and ATTACH DATABASE.

I’d be more excited, and imagine it would be more marketable, if it focused instead on being simply an embedded SQL DB that allowed multiple writers (for example), or some other use case where SQLite falls short. DuckDB is an example: SQLite, but for OLAP.


There is. For example, four months ago [0] they accidentally stumbled upon an explicitly documented quirk of the SQLite file format.

[0] https://news.ycombinator.com/item?id=45101854


I stumbled on the lock page myself when I was experimenting with writing a SQLite VFS. It's been years since I abandoned the project, so I don't recall much, including why I was using the sqlitePager, but I do recall the lock page being one of the first things I found: I needed to skip sending page 262145 (with 4096-byte pages) to the pager when attempting to write the metadata for a 1TB database.

I'm surprised they didn't have any extreme tests with a lot of data that would've found this earlier. Then again, achieving the reliability and test coverage of SQLite is a tough task. It does make the beta label very appropriate.
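For reference, that page number falls out of simple arithmetic: SQLite's file format reserves the page containing the "pending byte" at offset 0x40000000 (1 GiB), and that lock-byte page is never used for data, so a VFS or pager layer has to skip it. A quick sketch:

```python
# SQLite reserves the page containing byte offset 0x40000000 (1 GiB),
# the "pending byte"; that lock-byte page never holds data, so code
# working at the page level must skip it.
PENDING_BYTE = 0x40000000

def lock_page(page_size: int) -> int:
    # Pages are 1-indexed, so the page holding this offset is:
    return PENDING_BYTE // page_size + 1

print(lock_page(4096))   # → 262145, the page the parent comment hit
```

With 65536-byte pages the same formula gives page 16385, which is why the lock page only bites once a database grows past 1 GiB.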


I think that async support for multiple readers (and writers) is part of the reason for the separate Turso library in Rust over the C fork (libSQL). I also wouldn't be surprised if baking in better support for replication was a design goal. Being file-format compatible with SQLite is really useful as well.

In the end, the company is a distributed DBaaS provider using a SQLite interface to do so... this furthers that goal... being SQLite compatible in terms of final file structure just eases backup/recovery and duplication options.

I think being able to self-host either in an open/free and commercial/paid setting is also going to be important to potential users and customers... I'm not going to comment on the marketing spin, as it is definitely that.


You're only file format compatible if you don't use any of the Turso extensions.

Just like with STRICT tables, as soon as you use an unsupported feature in your schema, your database becomes incompatible.

With STRICT tables you needed to upgrade SQLite tools.

But if you use something from Turso you're placing yourself outside the ecosystem: the SQLite CLI no longer works, Litestream doesn't work, sqlite_rsync doesn't work, recovery tools don't work, SQLite UIs don't work.

Turso has no qualms with splitting the ecosystem. They consider themselves the next evolution of SQLite. The question is do you want to be part of it?


Maybe. I don't think having parallel divergence is inherently bad. DuckDB doesn't replace SQLite...

But depending on the need, I'm probably more inclined to reach for something like Turso today than Firebird if I want something I can embed and connect to a server to sync against, for example.

Like most things, it depends.


IIRC, multiple writers in SQLite are supported; their writes will just be serialized. What you don't have is concurrent writes. But given that SQLite writes are so fast, in practice it's not really a big deal.

If you haven't used SQLite in a real project with heavy writes, I'd say give it a try. SQLite is WAY more powerful than people tend to think.
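The serialization is easy to see from Python's built-in sqlite3 module. A minimal sketch (the file path and table name here are made up): two connections write to the same file, each write waits its turn behind the single write lock, and both land.

```python
import os
import sqlite3
import tempfile

# Hypothetical demo: two connections writing to one SQLite file.
# Writes are serialized by SQLite's locking; WAL mode additionally
# lets readers proceed concurrently with the single active writer.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path)
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
a.commit()

b = sqlite3.connect(path)
a.execute("INSERT INTO kv VALUES ('from_a', '1')")
a.commit()                       # releases the write lock
b.execute("INSERT INTO kv VALUES ('from_b', '2')")
b.commit()

print(a.execute("SELECT COUNT(*) FROM kv").fetchone()[0])  # → 2
```

If `b` had tried to commit while `a` still held the write lock, it would simply have blocked (or raised "database is locked" after the busy timeout), not corrupted anything.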


I've been using a SQLite alternative to avoid dependencies on a native library. It's a Go application that uses a native Go SQLite reimplementation, so I can create platform-specific binaries that include all dependencies. Makes installation easier and more reliable.


modernc.org/sqlite is upstream SQLite, compiled to Go using ccgo. Spiritually similar to, say, a WASM build of SQLite. Not a separate reimplementation.


Aha, wasn't aware, good to know, thanks.


I’ve been using it locally and with their hosted offering for a while now, and it’s rock solid, other than when I make super deeply nested joins which overflow something. Other than that it’s super fast and cheap; I haven’t needed more than the free tier for the bunch of stuff I host on Cloudflare Workers.


Fuck, my wife got a notice that she would have to increase her iCloud storage, so last week I began the process of ordering a backup of all her pictures so I could get them off iCloud and organized on some drives at home. We got 12 zips of the pictures along with CSVs and some metadata, and I literally just finished iterating on the script to sort them into year-based folders and convert all the HEIC shit into JPG. It's running literally right now.

Guess I should've searched harder!
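For anyone attempting the same thing, a rough sketch of that kind of sorter in Python (a hypothetical helper, not the parent's script: it dates files by mtime, which is less reliable than the metadata CSVs Apple ships, and only converts HEIC when an ImageMagick `magick` binary happens to be on PATH):

```python
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

def sort_photos(src: Path, dst: Path) -> None:
    """Copy photos from src into dst/<year>/ folders (hypothetical sketch).

    Years come from file mtime; .heic files are converted to .jpg via
    ImageMagick's `magick` if it is installed, else copied untouched.
    """
    for f in src.rglob("*"):
        if f.suffix.lower() not in {".jpg", ".jpeg", ".png", ".heic"}:
            continue
        year = str(datetime.fromtimestamp(f.stat().st_mtime).year)
        out_dir = dst / year
        out_dir.mkdir(parents=True, exist_ok=True)
        if f.suffix.lower() == ".heic" and shutil.which("magick"):
            subprocess.run(
                ["magick", str(f), str(out_dir / (f.stem + ".jpg"))],
                check=True,
            )
        else:
            shutil.copy2(f, out_dir / f.name)
```

A real iCloud export is better served by reading the capture date out of the accompanying CSVs or the EXIF data, since mtimes often reflect the download, not the photo.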


I still use it (shout out taviso iykyk).

https://github.com/zy/zy-fvwm/blob/master/fvwmrc/taviso.fvwm...

Someone made a full cde style desktop with fvwm: https://github.com/NsCDE/NsCDE

It’s too bad tech seems so intent on taking away this kind of configurability in the name of “we know better”. There’s so much to be said for software that can last this long, as opposed to the constant treadmill of forced updates.

Fuck gnome eternally for destroying gtk and fuck Wayland.


This is disingenuous and probably was written this way for HN cred and clicks. SQLite's test suite simulates just about every kind of failure you can imagine; this document is worth reading if you have any doubts: https://www.sqlite.org/atomiccommit.html


> SQLite's test suite simulates just about every kind of failure you can imagine

The page you link even mentions scenarios they know about that do happen and that they still assume won't happen. So even sqlite doesn't make anywhere near as strong a claim as you make.

> SQLite assumes that the operating system will buffer writes and that a write request will return before data has actually been stored in the mass storage device. SQLite further assumes that write operations will be reordered by the operating system. For this reason, SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed. We are told that the flush and fsync primitives are broken on some versions of Windows and Linux. This is unfortunate. It opens SQLite up to the possibility of database corruption following a power loss in the middle of a commit. However, there is nothing that SQLite can do to test for or remedy the situation. SQLite assumes that the operating system that it is running on works as advertised. If that is not quite the case, well then hopefully you will not lose power too often.
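The "flush or fsync at key points" pattern the docs describe looks roughly like this in application code (a sketch of the general durable-write idiom, not SQLite's actual implementation, which lives in its VFS layer):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    # Write to a temp file, fsync it so the data reaches stable
    # storage, atomically rename it into place, then fsync the
    # parent directory so the rename itself is durable.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()              # flush user-space buffers to the OS
        os.fsync(f.fileno())   # ask the OS to persist the file data
    os.replace(tmp, path)      # atomic rename over the target
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)        # persist the directory entry
    finally:
        os.close(dirfd)
```

The quoted paragraph's point is exactly that this whole scheme rests on `fsync` doing what it advertises; if the OS or drive lies about completion, no amount of application-level care can recover durability.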


There was a time when Oracle databases used raw disk partitions to minimize the OS's influence on what happens between memory and storage. It was more for multiple instances looking at the same SCSI device (Oracle Parallel Server).

I don't think that is often done now.


> So even sqlite doesn't make anywhere near as strong a claim as you make.

And? If you write to a disk and later that disk is missing, you don't have durability. SQLite cannot automatically commit your writes to a satellite for durability against a species-ending event on Earth, and hence its "durability" has limits exactly as spelled out by them.


You're arguing against a strawman; I pointed at a specific example. Sticking with that example: they could probe for this behavior or this OS version and crash immediately, telling the user to update their OS. Instead they acknowledge the issue exists and hope it doesn't happen. Which, hey, everybody does, but that's not the claim OP was making.


It’s not really a library’s job to cover all the bases you’re suggesting. They outline the failure scenarios fairly well, and users are expected to take note.


That document addresses atomicity, not durability, and is thus non-responsive to my concerns.

