Hacker News | sanxchit's comments

What an amazing tool, wish it had a GUI version as well.


From the screenshot examples in the readme, I’m not sure how substantial the benefits are over GUI tools like Kdiff3 or WinMerge that have existed for ages.


I've used WinMerge; it's very different from the above tool because it's still a text diff. It often gets the diffs wrong when multiple lines are involved. Take this trivial C# example: https://www.diffchecker.com/gr0H7qA1/

Most diff tools throw out something that is quite difficult to parse, but difftastic gave me the most concise diff so far.


Leela Chess Zero is the open-source successor of AlphaZero, and has been the second-strongest engine (except for a few times when it bested Stockfish) for the last few years.

[1] https://lczero.org/


While I agree with the point you made, I want to point out that the fatality rate for driving an automobile is much lower than 1 in 1,000. In the United States, the number of fatalities per vehicle-mile driven is approximately 1.5e-8[1]. On your average errand run, you have about a one in a hundred million chance of being killed/killing someone.

[1] - https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...


By that stat, you have a 1:66M chance per mile, so I think one in a hundred million is at least an order of magnitude too low. Nowhere close to 1:1000, though.
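To spell out the arithmetic (a quick sketch using the parent's 1.5e-8 fatalities-per-vehicle-mile figure):

```python
# Fatalities per vehicle-mile driven (the parent comment's figure)
fatality_rate = 1.5e-8

# Invert to get miles driven per fatality
miles_per_fatality = 1 / fatality_rate
print(f"{miles_per_fatality:.2e}")  # 6.67e+07, i.e. roughly 1 in 66-67 million per mile
```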


Not the original commenter, but Firefox has noticeably poorer performance on my 2018 MacBook Pro, especially on React-heavy sites like the AWS Console or Twitch.tv


Mostly this: performance is atrocious on a lot of sites I use.

I also dislike its history management, download management, autocomplete, address-bar search, Pocket integration, and much more. I'm sure that if I were forced to spend more time with it I could find configuration to customize all those things the way I like, but even after using it for a few days, the easily findable settings weren't flexible enough.


I realize your comment says you’ve already switched back, but should you ever try Firefox again most of these can be tweaked directly from about:config


I know, but at least as far as I dug into those settings, they couldn't be tweaked enough to match my preferences.


To me Chrome has poorer performance, especially when switching between tabs. I've seen Chrome take minutes to "load" a tab.


Yeah, there's an issue on Macs with scaled resolutions. Not a problem on other platforms, but I'm not sure why it's so sticky on the Mac.


Firefox has awful performance on nearly any web page for me (2014 Mac). Even its own settings page alone puts a core at 100%. I can get past bad font rendering and giving up pinch-to-zoom and not reading PDFs in the browser, but I can't give Firefox half my battery life. I'll use it on my work PC but it's still a huge waste of electricity.


Huh? I read all my PDFs in firefox.


Both Chrome and Firefox are noticeably heavier than Safari on macOS. The most obvious sign of that is battery usage.


Here’s a tip for saving battery on macOS:

https://news.ycombinator.com/item?id=18048844


Having worked with databases for a while, SQL seems to be useful because it forces you to think about how your data is structured. To me SQL is a thin wrapper around the relational algebra notation. The biggest problem I run into with SQL is that it is hard to tell how performant a complex query is before actually running it.


Not a great interface, but estimated query plans can be pretty useful for getting a feel for how heavy a query will be.
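For instance (a minimal sketch using SQLite, since no particular database was named; the table and index are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON users (name)")

# EXPLAIN QUERY PLAN estimates the plan without executing the query,
# so you can see whether an index will be used before paying for a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE name = ?", ("alice",)
).fetchall()
for row in plan:
    print(row[-1])  # a SEARCH using idx_name, rather than a full-table SCAN
```

Most engines have an equivalent (EXPLAIN in Postgres/MySQL, estimated plans in SQL Server), with varying levels of detail.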


Great writeup on data migrations. I was wondering whether you did a comparison for this method vs using AWS snowball[1] to export S3 Data and B2 Fireball[2] to ingest it.

[1] - https://docs.aws.amazon.com/snowball/latest/ug/create-export...

[2] - https://help.backblaze.com/hc/en-us/articles/360001918654


We looked briefly at the snowball and fireball, but wanted to do this as quickly as possible, whilst keeping the process entirely transparent to our users. It was also an excuse for our team to get intimately familiar with the B2 API, since it's not compatible with S3.

If we were to consider another large migration like this, physical media would probably be the way to go.


I evaluated moving 2 PB with Snowball vs. putting in 10G/100G links. The issue with Snowball (I started a company that did what Snowball does and shut it down; it failed) and other FedEx/RAID solutions is that you have 3 transfers. You think the LAN transfer will be quick - but you're generally rate-limited by the systems more than you are by the bandwidth-delay product. If you're in a high-traffic DC area, it's pretty easy to get temporary bandwidth or install circuits to carry that. 10g for 2pB is 18 days of transfer - which sounds like a lot - but that's 5 days of transfer on each site, 1 day of setup, and 1 week of shipping. Those numbers aren't real, but they're close.
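The 18-day figure checks out as pure wire time (a back-of-envelope sketch assuming a sustained, uncontended 10 Gb/s link and ignoring protocol overhead):

```python
data_bits = 2e15 * 8   # 2 PB expressed in bits
link_bps = 10e9        # a 10 Gb/s link
days = data_bits / link_bps / 86400
print(round(days, 1))  # 18.5 days of pure transfer time
```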

So, Snowball works in a lot of areas, but like so many AWS products, it works if you adapt to it.

pigz/scp/zstd works extremely fast in a pipeline.

In your case you're pulling from S3 to another object store.

I moved ~1 PB from one S3 region to another. "Why not use replication," they asked. That only works if it's turned on when you upload the object - another fine-print 'gotcha' in the easy AWS service. Then you get into rate limits. In 2010 I asked AWS if I could spin up 1000 servers to test something - nope - elasticity at that level is for the big boys.

Now I work for a large cloud company, and we still run into elasticity limits.

To move the 1 PB from one S3 region to another, we spun up hundreds of spot instances (oh, we were compressing it and moving it to Glacier too) and built a Perl/MySQL batch job around an "s3 get | zstd | s3 put" process and parallelized it. One nice thing about S3 is that it stores the MD5 hash - unless the upload was multipart, in which case it's the hash of the hashes, oh yeah... So you should split the file in advance if you want to verify the hash (more fine print).
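The "hash of the hashes" behavior is easy to reproduce locally: the widely observed scheme for multipart uploads is that S3 sets the ETag to the MD5 of the concatenated binary part digests, with a "-&lt;part count&gt;" suffix. A sketch (the part size and data are invented for illustration):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """S3-style multipart ETag: MD5 of the concatenated binary MD5 digests
    of each part, suffixed with '-<part count>'."""
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return f"{hashlib.md5(digests).hexdigest()}-{len(parts)}"

blob = b"x" * (12 * 1024 * 1024)              # a 12 MiB object
print(multipart_etag(blob, 8 * 1024 * 1024))  # ends in "-2": it was two parts
```

To verify against S3, you have to split the local file at the same part boundaries the uploader used, which is why splitting in advance matters.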

Worked great. Good for you for sharing this project, very cool.


"10g for 2pB is 18 days of transfer - which sounds like a lot - but that's 5 days of transfer on each site, 1 day of setup, and 1 week of shipping."

I can confirm this, to some degree.

We have larger customers with 20 or 40 or 80 TB of data to bring into rsync.net and everyone is always very interested in physical delivery, which we offer, but it's always easier to nurse along a 20-30 day transfer than ship JBODs around.

As long as you have a transfer mechanism that can be resumed efficiently (such as zfs send) and you don't have terribly bad bandwidth, we always counsel to just run the very long transfer. It does help that we are inside he.net in one location and two hops away from their core in two other locations, and we can just order 10 Gb circuits on a day's notice ... because he.net rocks.


I’ve never heard a bad thing about Hurricane Electric as a bandwidth/colo provider. Happy to hear their reputation persists.


As a physicist I've always found the name "elastic scaling" funny. If it's elastic in the physical sense, it means that the energy required to grow to some size is quadratic (or higher) in the size. The marketing meaning is "easy scaling", but the physical meaning is "really hard scaling".

E.g. compare a soap bubble versus a bubble gum bubble. It's a lot easier to scale up the soap bubble, which is not elastic.


It's a very good observation, and I think it's more than just a funny aside. The word 'elastic' connotes increasing resistance as the cluster grows, but this is a false intuition. From AWS's POV, the 'resistance' to adding a node is small, fixed, and generally independent of cluster size. I suspect this is what makes cloud compute in general, and EC2 in particular, such a cash cow.

Moreover, it turns out that elasticity is a very valuable quality of a cluster for most workloads; we want the intuition to be true that the cluster meets resistance as it grows, in the sense that it will shrink back when the workload decreases. This matches our economic intuition, too. We want this so much that we build another software layer to make it happen - e.g., k8s.


"Quantum leap" is similarly misused to mean "big change" when its physical meaning is "smallest physically possible change".


We did a Snowball transfer of 150 TB (mostly media files) from our on-prem DC. Cost is one thing we really failed to plan for. You're charged per day you have the Snowball (in our case, 3 for 2 separate DCs).

During the transfer, the AWS sync constantly failed due to random issues which drove up the total time to transfer the files. Something like having a tilde (~) in the filename will totally break the sync. You really need to keep track of where it failed. We were constantly trying to craft additional rules into our sync logic to catch the 'gotchas'.
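A pre-flight pass over the file list can catch some of these before the sync dies mid-transfer. A rough sketch (the character set here is illustrative guesswork, not an official AWS restriction list):

```python
import re

# Characters we found, or suspected, could trip up the sync tooling;
# adjust to whatever your transfer actually chokes on.
SUSPECT = re.compile(r"[~#%{}<>*?$'`|\\]")

def flag_suspect_paths(paths):
    """Return the paths likely to need renaming before a bulk sync."""
    return [p for p in paths if SUSPECT.search(p)]

print(flag_suspect_paths(["media/clip.mp4", "backup~2019/file.bin"]))
# → ['backup~2019/file.bin']
```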

Another point you alluded to was the ETag/MD5 sum that's stored in AWS. Pretty useful if you know how to use it...


B2 should allow you to ship your full snowball directly to them to offload


Skimming through the recipes, it seems to be mostly meat-based dishes. I would like to see a similar cookbook with plant-based alternatives.


Toni Okamoto’s book “Plant Based on a Budget” is coming out soon and looks likely to fit the bill.

https://www.amazon.com/Plant-Based-Budget-Delicious-Recipes-...

I’ve been cooking entirely plant based for three years now and cook almost all my own meals. At first I was following recipes but now I just keep a well stocked cupboard and improvise. The results are a little variable but I enjoy cooking this way more. But the basis of my diet is legumes and whole grains with as many fresh fruits and vegetables as I can handle. It’s very cheap to eat this way.


Open a bag of Fritos. Dump in a cup of melted Velveeta, some pickled jalapeños, half a can of black beans. Mix it up and eat it out of the bag. Then, loathe yourself and wonder how it came to this.


Most of the loathing probably comes from the month's RDA of sodium described in your proposal.

Substitute some sort of more traditional cheese and some roasted unsalted peanuts in the above recipe and you'll probably enjoy it about as much and feel better afterwards.


Substitute Fritos for not Fritos to make the dish even better.


Roasted veggies. Garlic is your friend. Prep time zilcho.


You can definitely build a fully functional web app using just serverless. For an example, take a look at https://acloud.guru . Where I work, we almost exclusively use serverless, and I have found it to be incredibly reliable, and way more hands-off than a docker deployment.


You're not alone. I forced myself to read through half the book before asking myself what the point was of solving essentially leetcode-style problems using only tail recursion and glorified linked lists. At no point did I ever feel that the Scheme language had any practical applications.


Running functions in a VPC causes them to have cold starts of ~5 seconds. Connection pools need to be centralized to be of any use, since Lambdas scale automatically.
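The usual workaround (a hedged sketch; sqlite3 stands in for a real database driver, and all names are illustrative) is to cache one connection at module scope, so warm invocations of the same container reuse it, and to leave real pooling to a central proxy such as RDS Proxy:

```python
import os
import sqlite3  # stand-in for a real driver like pymysql or psycopg2

_conn = None  # module scope survives across warm invocations of one container

def get_connection():
    """One connection per Lambda container. Actual pooling has to live in a
    central proxy, since containers scale out independently and can't
    share an in-process pool."""
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(os.environ.get("DB_PATH", ":memory:"))
    return _conn

def handler(event, context):
    conn = get_connection()  # cold start: connect; warm start: reuse
    row = conn.execute("SELECT 1").fetchone()
    return {"statusCode": 200, "body": str(row[0])}
```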


I found myself in the middle of migrating a startup's entire "something" ingestion pipeline to be entirely done on Lambdas. For our purposes of one data block every 10-20 minutes, and then processing and ingesting that data, Lambdas work perfectly and are about 10x cheaper than using ECS or EC2.

