I actually enjoyed the writing. It's clearly a reflection on the experience, presented somewhat jokingly as an "advice list". Since the author didn't enjoy the experience, the tone is somber. Having spent my childhood in a cold place, I can relate.
Yes, on some credit cards it's actually 2% cash - the Apple Card, Fidelity.
Amazon gives you 5% back for using their credit card; it's criminal not to use it.
If you buy a lot of equipment, or expensive equipment, the B&H credit card covers sales tax - about 10% in my area! (I don't use it since I don't buy that much, but it's still an option.)
My friend, as a rule of thumb, every additional player in a transaction takes a cut.
So assuming the rest is all the same, you just paid exactly what you would have paid with a debit card, because the merchant had to raise prices to accommodate the fee. And that assumes the credit card company isn't taking a cut, which we all know isn't true.
The merchant chose not to offer a lower debit card / cash price because the merchant bets that people will pay a higher price if they use credit cards, so the merchant incentivizes credit card usage by charging the same price for credit card and non-credit-card payments.
There are merchants that do not do this, such as Target, which charges 5% to use a credit card. Insurers/tutors/daycares/schools/healthcare providers/contractors/gas stations/restaurants/governments/utilities are also known to frequently charge more for credit card payments.
Any seller can choose to offer a lower price for debit card / ACH / Zelle payments if they want to.
Even ignoring the cut taken by the credit card issuer, why do I have to go through some random card to get a 2% discount, when prices could just be 2% lower across the board by default?
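The arithmetic behind this complaint can be sketched with toy numbers (the 2.5% processing fee is an assumption for illustration; real interchange rates vary by card and merchant):

```python
# Toy model of "cashback" economics: the merchant bakes the card
# processing fee into the sticker price everyone pays, and the card
# refunds part of it only to credit card users.
sticker_price = 100.00   # advertised price, same for every payment method
processing_fee = 0.025   # hypothetical credit card processing fee (2.5%)
cash_back = 0.02         # the typical "2% back" reward

merchant_receives = sticker_price * (1 - processing_fee)
credit_buyer_nets = sticker_price * (1 - cash_back)  # price net of rewards
debit_buyer_pays = sticker_price                     # same price, no reward

print(f"merchant keeps:    {merchant_receives:.2f}")  # 97.50
print(f"credit buyer nets: {credit_buyer_nets:.2f}")  # 98.00
print(f"debit buyer pays:  {debit_buyer_pays:.2f}")   # 100.00
# The 0.5% gap between fee and reward stays with the middlemen,
# and the debit/cash buyer gets no rebate at all.
```

Under these assumed numbers, prices really could be ~2% lower across the board if the fee-and-rebate loop were removed; the rebate only claws back part of a markup everyone already pays.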
Are you using Claude Code? Because that might be the secret sauce you're missing. With Claude Code I can instruct it to validate things after it's done with the code, and it usually finds that it goofed. I can also tell it to work on, say, five different things - "hey, spin up some agents to work on this" - and it will spawn five agents in parallel to work on them.
I've basically ditched Grok et al., and I refuse to give Sam Altman a penny.
For the schema design phase I used the web UI for all three.
A logic bug like using BIGSERIAL for tracking updates (values are generated at insert time, not commit time, so rows can become visible out of order) wouldn't be caught by any number of iterations of Claude Code; it would be found in production after weeks of debugging.
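The failure mode is subtle enough that a tiny simulation makes it concrete (plain Python, no real database; the variables stand in for a BIGSERIAL column and a change-polling query):

```python
# Why BIGSERIAL is unsafe as a "what changed since X" cursor:
# sequence values are assigned at INSERT time, but rows only become
# visible to other sessions at COMMIT time.
seq = 0        # stands in for the BIGSERIAL sequence
visible = []   # rows a poller can see, i.e. committed rows

def next_id():
    """Hand out the next sequence value at insert time, like BIGSERIAL."""
    global seq
    seq += 1
    return seq

a_id = next_id()      # transaction A inserts first -> id 1, but is slow
b_id = next_id()      # transaction B inserts second -> id 2...
visible.append(b_id)  # ...and commits immediately

last_seen = max(visible)  # poller runs now and records "seen up to id 2"
visible.append(a_id)      # transaction A finally commits id 1

new_rows = [r for r in visible if r > last_seen]
print(new_rows)  # [] -- the late-committing row id 1 is skipped forever
```

Any poller that does `WHERE id > last_seen` has exactly this race, and no amount of unit testing on a single connection will surface it.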
At this point having any LLM write code without giving it an environment that allows it to execute that code itself is like rolling a heavily-biased random number generator and hoping you get a useful result.
Things get so much more interesting when they're able to execute the code they are writing to see if it actually works.
So much this. Do we program by writing reams of code, never running the compiler until it's all written, and then judge the programmer as terrible when it doesn't compile? Or do we write code incrementally, compiling and testing as we go along? So why do we think having the AI do the former - and fail - is setting it up for success? If I wrote code on a whiteboard and was judged for making syntax errors, I'd never have gotten a job. Give the AI the tools it needs to succeed, just like you would for a human.
> Do we program by writing reams of code and never running the compiler until it's all written and then judging the programmer as terrible when it doesn't compile?
DB last year: I took a train from Frankfurt Airport to Bonn, only to watch Bonn sail past the window as the train went on to Cologne.
I asked locals what was going on; it turned out all the trains were late, and this one had departed from a platform already marked for Bonn! "You should watch which train number you board on DB, not trust the sign on the platform!" the locals helpfully advised me.
Any non-trivial amount of data and you’ll run into non-trivial problems.
For example, some of our pg databases got into such a state that we had to write a custom migration tool, because we couldn't copy the data to a new instance using standard tools. We had to rewrite the schema to use custom partitioning, because performance of built-in partitioning degrades as the number of partitions gets high, and so on.
It was high voltage but low current. As a teen I accidentally touched the high-voltage circuit in the back of a TV while poking around in it, and while it was quite unpleasant, all it did was burn a hole in the skin of my finger. It eventually healed.
Very cool - what's the temperature range/wavelengths? (It would be a good idea to specify this on the product page; otherwise it's unclear how it's different from other lightbulbs.)
The bulb ranges from 1700K to 2100K (it warm dims)
Atmos ranges from 1800K to 5700K
Maybe not the most obvious, but for both products, it’s in the tech specs under Quality of Light. We try to be very detailed with what we publish there. Thanks!
Indeed there are very detailed specs on the bottom of the page!
It's not obvious because I didn't get there - I expected it to be in one of the expanding sections like "Product Details". (I.e., when you have expanding sections to start with, the convention is that all the information is in the sections, and users are trained not to scroll down.)