The quality of the coffee depends on the technique used (and on who makes it). Yes, most decaf coffees suck, but there are some very good ones to be found. For example, Arpeggio [1] is, for me and a few people I know, the best of all Nespresso coffees. In specialized coffee bars you can get amazing decafs.
The main problem seems to be the tracking pixel itself, used to deduce engagement. The suggested approach of sending a confirmation email seems better, unless it contains a link to a login page (as that can be phished). So the best option seems to be sending an email that explains how to confirm the address by logging into the app manually.
Not OP and this is mere anecdata, but on a modest several-years-old ThinkPad, Zotero was slow when my single collection started pushing over 1,000 papers, most of which had PDFs attached. Starting up would take many seconds (half a minute?) and heavy operations such as bulk-renaming would take minutes. But for day-to-day use (adding references to my collection via a browser plugin) it was fine.
Personally, I relied on auto-export for everything beyond the basics, so I didn't use any Word (or LibreOffice) plugins that hooked into Zotero. I'd just consume a giant .bib file as and when necessary.
On modern hardware Zotero is probably fine. And it's reasonably flexible. A suggestion: export/import a big refs file (plus PDF attachments) and see if it can handle your daily workload. I suspect it will.
Is there any benefit to this tool over opening docs in a Windows Sandbox/VM with networking disabled? Conversion can easily be done with a simple tool that screenshots each page within the sandbox (for example, a few lines of AutoHotkey script).
Anybody have experience using DuckDB to quickly select a page of filtered transactions from a single table with a couple of billion records and, say, 30 columns, where each column can be filtered using a simple WHERE clause? Let's say 10 years of payment order data. I'm wondering since this is not an analytical scenario.
Doing that in Postgres takes some time, and even a simple count(*) takes a long time (with all columns indexed).
I've used duckdb a lot at this scale, and I would not expect something like this to take more than a few seconds, if that. The only slow duckdb queries I have encountered either involve complex joins or glob across many files.
I'm not so sure the common index algorithms can speed up a count. How often is the table updated? If it's updated often and the count is also queried often, run the count on a schedule and store the result separately; if it isn't queried often, refresh it more seldom.
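The "store the result separately" idea can be sketched in a few lines. This is a toy version using sqlite3 as a stand-in for Postgres (the table and stat names are made up); in production the refresh would be driven by cron, pg_cron, or similar:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
con.execute(
    "CREATE TABLE stats (name TEXT PRIMARY KEY, value INTEGER, refreshed_at REAL)"
)
con.executemany(
    "INSERT INTO orders (amount) VALUES (?)", [(i * 1.5,) for i in range(1000)]
)

def refresh_count():
    # The expensive count(*) runs here, on a schedule, not on every page load.
    n = con.execute("SELECT count(*) FROM orders").fetchone()[0]
    con.execute(
        "INSERT INTO stats (name, value, refreshed_at) "
        "VALUES ('orders_count', ?, ?) "
        "ON CONFLICT(name) DO UPDATE SET "
        "value = excluded.value, refreshed_at = excluded.refreshed_at",
        (n, time.time()),
    )

def cached_count():
    # Readers get a slightly stale but instant answer.
    row = con.execute(
        "SELECT value FROM stats WHERE name = 'orders_count'"
    ).fetchone()
    return row[0] if row else None

refresh_count()
print(cached_count())
```

The trade-off is staleness: between refreshes the count can drift, which is usually acceptable for dashboards and pagination hints but not for anything transactional.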
From what you describe, I'd expect a list of column-value pairs under a WHERE to resolve pretty fast, provided it uses indices and doesn't fish out large amounts of data at once.