Hacker News | majkinetor's comments

Find better decaf. Caffeine has no taste.


Don't the beans need to be processed to remove the caffeine ?


The quality of the coffee depends on the technique used (and who does it). Yes, most decaf coffees suck, but you can find some very good ones. For example, Arpeggio [1] is, for me and a few people I know, the best of all Nespresso coffees. In specialized coffee bars you can get amazing decafs.

[1]: https://www.nespresso.com/us/en/order/capsules/original/arpe...


It's not only overblown, it's a total non-issue.

The main problem seems to be the tracking pixel itself, used to deduce engagement. The suggested approach of sending an email to confirm the address seems better, unless it contains a link to a login page (which can be phished). So the best option seems to be sending an email that explains to the user how to confirm the address by logging in to the app manually.
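A minimal sketch of that no-link flow, using only the standard library; the helper names and code length here are made up for illustration:

```python
import hashlib
import hmac
import secrets

def issue_confirmation_code() -> str:
    """Generate a short one-time code to put in the confirmation email
    (no login link, so there is nothing to phish)."""
    return secrets.token_hex(4)

def store_code(code: str) -> str:
    """Keep only a hash of the code server-side, never the code itself."""
    return hashlib.sha256(code.encode()).hexdigest()

def verify_code(submitted: str, stored_hash: str) -> bool:
    """Constant-time check, run after the user logs in to the app manually
    and types the code from the email."""
    submitted_hash = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(submitted_hash, stored_hash)
```

The email would then just say "log in to the app and enter this code", so a phisher has no URL to imitate.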


I don't think it matters at all.

HTTPS doesn't encrypt query parameters. The content of the image itself is irrelevant, as its only purpose is to get the request URL into the server logs.


HTTPS does encrypt query parameters; the entire HTTP request goes inside the encrypted session.

The only thing outside is the hostname (sent via SNI), and even that is hidden if the connection uses the latest TLS features (Encrypted Client Hello).


That must be the worst repo I have ever seen.


Zotero is very good nowadays. It's unbearably slow though, and it doesn't seem that will ever get better.


Could you share how/when it is slow? We’re considering using this at work and I’d love your feedback.


Not OP and this is mere anecdata, but on a modest several-years-old ThinkPad, Zotero was slow when my single collection started pushing over 1,000 papers, most of which had PDFs attached. Starting up would take many seconds (half a minute?) and heavy operations such as bulk-renaming would take minutes. But for day-to-day use (adding references to my collection via a browser plugin) it was fine.

Personally, I used auto-export for all additional functionality. So, I didn't use any Word (LibreOffice) plugins that hooked into Zotero or whatever. I'd just consume a giant .bib file as and when necessary.

On modern hardware Zotero is probably fine. And it's reasonably flexible. A suggestion: export/import a big refs file (plus PDF attachments) and see if it can handle your daily workload. I suspect it will.


Is there any benefit to this tool over opening docs in a Windows Sandbox/VM with networking disabled? Conversion can be done with a simple tool that screenshots each page inside the sandbox (for example, with a few lines of AHK script).


Does anybody have experience using DuckDB to quickly select a page of filtered transactions from a single table with a couple of billion records and, say, 30 columns, where each column can be filtered using a simple WHERE clause? Let's say 10 years of payment order data. I am wondering because this is not an analytical scenario.

Doing that in Postgres takes some time, and even a simple count(*) takes a long time (with all columns indexed).


I've used duckdb a lot at this scale, and I would not expect something like this to take more than a few seconds, if that. The only slow duckdb queries I have encountered either involve complex joins or glob across many files.


I'm not so sure the common index algorithms would help speed up a count. How often is the table updated? If it's updated often, and also queried often for the count, then run the count on a schedule and store the result separately; if it isn't queried often, run it less frequently.

From what you describe, I'd expect a list of column-value pairs under a WHERE to resolve pretty fast, provided it uses indices and doesn't fetch large amounts of data at once.
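The scheduled-count idea above could be sketched like this (illustrated with stdlib sqlite3 rather than Postgres; the table and stat names are hypothetical, and the refresh would be driven by cron or a similar scheduler):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    -- Side table holding the periodically refreshed count.
    CREATE TABLE stats (name TEXT PRIMARY KEY, value INTEGER);
""")
con.executemany("INSERT INTO orders (status) VALUES (?)",
                [("paid",)] * 7 + [("pending",)] * 3)

def refresh_cached_count(con):
    """Run this from a scheduler instead of counting on every request."""
    (n,) = con.execute("SELECT count(*) FROM orders").fetchone()
    con.execute(
        "INSERT INTO stats (name, value) VALUES ('orders_total', ?) "
        "ON CONFLICT(name) DO UPDATE SET value = excluded.value", (n,))
    con.commit()

refresh_cached_count(con)
(cached,) = con.execute(
    "SELECT value FROM stats WHERE name = 'orders_total'").fetchone()
```

Reads then hit the tiny stats table instead of scanning billions of rows, at the cost of the count being slightly stale between refreshes.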


Try PlantUML


Try 1remote


Try flameshot

