pdhborges's comments

It's in the file metadata:

- LuaTeX-1.17.0

- LaTeX via pandoc


Yup. The only major changes here are fonts and twocolumn. https://gist.github.com/aphyr/6f0cd6910ccfe2cd7828d1ade2eac5...

I don't even understand what approach 3 is doing. They ended up hashing the random part of the API key with a hash function that produces a small digest and storing that in the metashard server, is that it?

Yea... sorry, I'm still not the best explainer, but that is the approach: I just wanted a shorter hash in the meta shard, that's it. Approach 3 is my attempt at writing my own base62/base70 encoder ;-;
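For what it's worth, here's a minimal sketch of that idea as I understand it: hash the random part of the key down to a short digest, then base62-encode it for compact storage. The function names, digest size, and choice of BLAKE2 are my own assumptions, not the actual implementation.

```python
import hashlib

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62(data: bytes) -> str:
    """Encode bytes as a base62 string."""
    n = int.from_bytes(data, "big")
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def short_key_hash(random_part: str, digest_size: int = 8) -> str:
    """Hash the random part of an API key to a short digest,
    then base62-encode it for compact storage in the meta shard."""
    digest = hashlib.blake2b(random_part.encode(), digest_size=digest_size).digest()
    return base62(digest)
```

An 8-byte digest base62-encodes to at most 11 characters, which is the kind of "shorter hash" described above.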

Apple's accidental moat now is that it can let the AI-driven rise in hardware prices eat into its margins and just expand the Mac user base.

Do you know why it is a toy? Because in a real prod environment, after inserting 240k rows per second for a while, you have to deal with the fact that schema evolution is required. Good luck migrating those huge tables with SQLite's ALTER TABLE implementation.
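For context: for changes ALTER TABLE can't express, SQLite's documentation recommends rebuilding the table (create a new table, copy the rows, drop the old one, rename). A minimal sketch of that pattern, with a hypothetical `users` table whose `age` column is being changed from TEXT to INTEGER:

```python
import sqlite3

def widen_users_table(db_path: str) -> None:
    """Rebuild `users` to change a column type (hypothetical example),
    using the new-table / copy / drop / rename pattern SQLite's docs
    recommend for changes ALTER TABLE cannot express."""
    conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit; we manage the txn
    try:
        conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
        conn.execute("""
            CREATE TABLE users_new (
                id    INTEGER PRIMARY KEY,
                email TEXT NOT NULL UNIQUE,
                age   INTEGER          -- was TEXT in the old schema
            )
        """)
        conn.execute(
            "INSERT INTO users_new SELECT id, email, CAST(age AS INTEGER) FROM users"
        )
        conn.execute("DROP TABLE users")
        conn.execute("ALTER TABLE users_new RENAME TO users")
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
    finally:
        conn.close()
```

The catch the comment is pointing at: the INSERT...SELECT copies every row while holding the write lock, so on a huge table this means extended downtime.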

Try doing that on a “real” DB with hundreds of millions of rows too. Anything more than adding a column is a massive risk, especially once you’ve started sharding.

Yes, it might be risky. But most schema evolution changes can be done with no or minimal downtime, even if you have to do them in multiple steps. When is a simple ALTER going to be totally unacceptable if you are using SQLite?

This doesn't seem like a toy but you know... realizing different systems will have different constraints.

Not everyone needs monopolistic tech to do their work. There are probably fewer than 10,000 companies on earth that truly need to write 240k rows/second. For everyone else, we can focus on better things.


> realizing different systems will have different constraints.

I realize that. There are a few comments already that present use cases where I can totally see SQLite being a good option.

> Not everyone needs monopolistic tech to do their work

We are talking about localhost Postgres vs SQLite here. Both are open source.


It gets proper backups if you back it up the right way: https://sqlite.org/backup.html
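In Python, for instance, `sqlite3.Connection.backup` wraps that online backup API, so a snapshot can be taken from a live database without holding the lock for the whole copy (the paths here are illustrative):

```python
import sqlite3

def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Copy a live database with SQLite's online backup API
    (the mechanism described at https://sqlite.org/backup.html)."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        # pages=100: copy in chunks, letting other connections run between chunks
        src.backup(dest, pages=100)
    finally:
        dest.close()
        src.close()
```

This is the "right way" the link describes, as opposed to copying the database file while it's being written.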

I bet that takes more time than the 5 extra minutes you take to set up Postgres on the same box upfront.

To export a database? Probably even faster. And that's ignoring the difference in performance.

So you are migrating from SQLite to Postgres because you need to. What is the state of your product when you need to do this migration? Is your product non-trivial? Are you now dependent on particular performance characteristics of SQLite? Do you now need to keep your service running 24/7? Accounting for all of that takes way more than 5 minutes. The only way to beat that is if you still have a toy product and you can just export the database, import it, and pray that it all works as a migration strategy.

What do you recommend as reading material for someone who was in college a while ago (before AE modes got popular) to get up to speed with the new PQ developments?


If you want something book-shaped, the 2nd edition of Serious Cryptography is updated to when the NIST standards were near-final drafts, and has a nice chapter on post-quantum cryptography.

If you want something that includes details on how they were deployed, I'm afraid that's all very recent and I don't have good references.


> The other side of this is building safety nets. Takes ~10min to revert a bad deploy.

Does it? Reverting a bad deploy is not only about running the previous version.

Did you mess up data? Did you take actions on third-party services that need to be reverted? Did it have legal repercussions?


> Does it? Reverting a bad deploy is not only about running the previous version.

It does. We've tried. No, it's not as easy as just running the previous version.

I have written about this: https://swizec.com/blog/why-software-only-moves-forward/


I read the article and, to be honest, I don't know where we disagree. I disagree with this quote:

> Takes ~10min to revert a bad deploy

A bad deploy can take way longer than that in customer or partner communication alone.


Having data model changes be a part of regular deployments would give me persistent heartburn.


It's why you always have a rollback plan. Every `up` needs a `down`.
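A minimal sketch of that discipline, with a purely hypothetical migration: each migration ships as an up/down pair, and a rollback replays the `down` side in reverse order.

```python
import sqlite3

# Hypothetical migrations: every `up` ships with the `down` that undoes it.
MIGRATIONS = [
    {
        "up":   "CREATE TABLE order_notes (order_id INTEGER, note TEXT)",
        "down": "DROP TABLE order_notes",
    },
]

def migrate(conn: sqlite3.Connection, direction: str) -> None:
    """Apply all migrations forward, or roll them back in reverse order."""
    assert direction in ("up", "down")
    steps = MIGRATIONS if direction == "up" else list(reversed(MIGRATIONS))
    for step in steps:
        conn.execute(step[direction])
    conn.commit()
```

The sibling comments' objection applies here too: a `down` that drops a table discards the data written since the `up`, which is why reverting is never just about running the previous version.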


If you do that, it expands your test matrix quadratically.

So, it makes sense if you have infinite testing budgets.

Personally, I prefer exhaustively testing the upgrade path, and investing in reducing the time it takes to push out a hot fix. Chicken bits are also good.

I haven’t heard of any real world situations where supporting downgrades of persistent formats led to best of class product stability.

Would love to hear of an example.


Aircraft engineer: “That’s why you have parachutes.”

They might be an appropriate safeguard for a prototyping shop, but not for Delta.


I wouldn't say acceptance of crappy code. I think the issue is the acceptance of LLM plans after just a glance, and the acceptance of code without any review by the author at all, because if the author spent any more time on it, it wouldn't be worth it anymore.


The problem is those plans become huge. Now I have to review a huge plan and the comparatively short code change.


It shouldn't be any longer than the actual code; just have it write "easy pseudocode" and it's still something you can audit and then have it translate into actual code.

