I don't even understand what approach 3 is doing. They ended up hashing the random part of the API key with a hash function that produces a small hash and stored that in the meta-shard server, is that it?
yea... sorry, I'm still not the best explainer, but that is the approach: I just wanted a shorter hash in the meta shard, that's it. Approach 3 is my attempt at writing my own base62/base70 encoder ;-;
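For what it's worth, here's a minimal sketch of how I read that description: hash the random part of the API key, keep a short prefix, and base62-encode it for the meta-shard lookup key. The names, hash choice, and lengths below are my assumptions, not the author's actual code.

```python
import hashlib

BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62_encode(data: bytes) -> str:
    """Encode bytes as a base62 string using the big-integer method."""
    n = int.from_bytes(data, "big")
    if n == 0:
        return BASE62[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(BASE62[rem])
    return "".join(reversed(out))

def meta_shard_key(api_key_random_part: str, short_len: int = 8) -> str:
    """Short identifier to store in the meta shard: truncated SHA-256, base62-encoded.
    (Hypothetical: the real scheme may use a different hash or prefix length.)"""
    digest = hashlib.sha256(api_key_random_part.encode()).digest()
    return base62_encode(digest[:short_len])

print(meta_shard_key("f3b1c9d2e4a67788"))  # much shorter than the full hex digest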
Do you know why it is a toy? Because in a real prod environment, after inserting 240k rows per second for a while, you have to deal with the fact that schema evolution is required. Good luck migrating those huge tables with SQLite's ALTER TABLE implementation.
Try doing that on a “real” DB with hundreds of millions of rows too. Anything more than adding a column is a massive risk, especially once you’ve started sharding.
Yes, it might be risky. But most schema evolution changes can be done with no or minimal downtime, even if you have to do them in multiple steps. When is a simple ALTER going to be totally unacceptable if you are using SQLite?
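To make "multiple steps" concrete, here's a rough expand/backfill/contract sketch with SQLite; the table and column names are made up, and the exact steps depend on your schema and SQLite version.

```python
import sqlite3

conn = sqlite3.connect("app.db")

# Step 1 (expand): add the new column; ADD COLUMN is cheap in SQLite.
conn.execute("ALTER TABLE users ADD COLUMN email_normalized TEXT")

# Step 2 (backfill): populate the new column in small batches so writers
# are never blocked for long; the app keeps running between batches.
while True:
    cur = conn.execute(
        """UPDATE users
           SET email_normalized = lower(email)
           WHERE rowid IN (SELECT rowid FROM users
                           WHERE email_normalized IS NULL LIMIT 1000)"""
    )
    conn.commit()
    if cur.rowcount == 0:
        break

# Step 3 (contract): once readers use the new column, drop the old one.
# (DROP COLUMN needs SQLite 3.35+; older versions need a table rebuild.)
conn.execute("ALTER TABLE users DROP COLUMN email")
conn.commit()
```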
This doesn't seem like a toy, but you know... realizing that different systems will have different constraints.
Not everyone needs monopolistic tech to do their work. There are probably fewer than 10,000 companies on earth that truly need to write 240k rows/second. For everyone else, we can focus on better things.
So you are migrating from SQLite to Postgres because you need it. What is the state of your product when you need to do this migration? Is your product non-trivial? Are you now dependent on particular performance characteristics of SQLite? Do you now need to keep your service running 24/7? Accounting for all of that takes way more than 5 minutes. The only way to beat that is if you still have a toy product and you can just export the database, import it, and pray that it all works as a migration strategy.
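For the sake of argument, the "export, import, and pray" path looks roughly like the sketch below, assuming a trivial schema and the psycopg2 driver (both assumptions, not anything from the thread). Anything real also needs type mapping, constraints, indexes, sequences, and a cutover window.

```python
import sqlite3
import psycopg2  # assumption: plain psycopg2, one small table, no FKs/indexes

# Naive migration: copy one table row by row. Fine for a toy; not a plan
# for a live 24/7 product.
src = sqlite3.connect("app.db")
dst = psycopg2.connect("postgresql://localhost/app")

with dst, dst.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS users (id BIGINT PRIMARY KEY, email TEXT)")
    for row in src.execute("SELECT id, email FROM users"):
        cur.execute("INSERT INTO users (id, email) VALUES (%s, %s)", row)
```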
What do you recommend as reading material for someone who was in college a while ago (before AE modes got popular) to get up to speed with the new PQ developments?
If you want something book-shaped, the 2nd edition of Serious Cryptography is updated to when the NIST standards were near-final drafts, and has a nice chapter on post-quantum cryptography.
If you want something that includes details on how they were deployed, I'm afraid that's all very recent and I don't have good references.
If you do that, it expands your test matrix quadratically.
So, it makes sense if you have infinite testing budgets.
Personally, I prefer exhaustively testing the upgrade path, and investing in reducing the time it takes to push out a hot fix. Chicken bits are also good.
I haven’t heard of any real-world situations where supporting downgrades of persistent formats led to best-in-class product stability.
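In case "chicken bits" is unfamiliar: a runtime flag that lets an operator disable a new code path (here, a new persistent format) without shipping a new build. Purely illustrative; the flag name and writer functions below are made up.

```python
import json
import os

def write_record(record: dict, fh) -> None:
    """Write one record, guarded by a kill switch for the new format."""
    if os.environ.get("DISABLE_V2_FORMAT") == "1":   # chicken bit: operator kill switch
        write_record_v1(record, fh)                  # old, battle-tested path
    else:
        write_record_v2(record, fh)                  # new format, still guarded

def write_record_v1(record: dict, fh) -> None:
    fh.write(repr(record) + "\n")        # stand-in for the legacy encoder

def write_record_v2(record: dict, fh) -> None:
    fh.write(json.dumps(record) + "\n")  # stand-in for the new encoder
```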
I wouldn't say it's acceptance of crappy code. I think the issue is accepting LLM plans after just a glance, and accepting code without any review by the author at all, because if the author spent any more time on it, it wouldn't be worth it anymore.
It shouldn't be any longer than the actual code; just have it write "easy pseudocode", and it's still something you can audit and then have it translate into actual code.
- LuaTeX-1.17.0
- LaTeX via pandoc