Signed into law in 2002, with the last reactor going offline in 2023. Depending on how you count, we got rid of it a quarter-century ago.
Not the best decision, and a major reason why Germany uses so much coal and gas today. But outside of some special circumstances, nuclear isn't cost-competitive with renewables anymore, so for future plans it doesn't really matter.
Would be karma for all the unnecessary flights we have taken as a species.
In particular anyone who does 'mileage runs' and emits huge amounts of CO2 just so they have the 'privilege' to sit in a slightly nicer chair in a dull airport lounge.
>In particular anyone who does 'mileage runs' and emits huge amounts of CO2 just so they have the 'privilege' to sit in a slightly nicer chair in a dull airport lounge.
I doubt anyone is doing this? At best they're grinding out flights so they can get free first/business class seats later.
People do this to meet minimum requirements for mileage tiers, e.g. I know someone who was close to Diamond status on Delta and went to Miami and back without leaving the airport area just for the miles.
Look at the Flyertalk BA forum from 2005-2020. It was a huge thing, and not always for upgrades, because BA has been stingy with upgrades for a long, long time. Lounge access, baggage allowance, priority boarding etc. were a huge part of it.
Popular mileage runs were London to Honolulu with lots of sectors on the way, iirc!
This explains why China's defense capabilities are outpacing the west in 2026. The defense behemoth who castrates users by denying them the all-powerful TrackPoint will be doomed to irrelevance very soon.
100% agreed. I wish someone would make a test for how reliably the LLMs follow tool use instructions etc. The pelicans are nice but not useful for me to judge how well a model will slot into a production stack.
When I first started using LLMs I read and analyzed benchmarks, looked at what example prompts people used, and so on. But many times a new model does best on the benchmarks, you think it'll be better, and then in real work it completely drops the ball. Since then I've stopped even reading benchmarks; I don't care an iota about them, as they always seem more misdirected than helpful.
Today I have my own private benchmarks, with tests I run myself and test cases I refuse to share publicly. These have been built up over the last year to year and a half: whenever I find something my current model struggles with, it becomes a new test case in the benchmark.
Nowadays it's as easy as `just bench $provider $model`: it runs my benchmarks against the model and gives me a score that actually reflects what I use the models for, and it more or less matches my experience of actually using the models. I recommend that people who use LLMs for serious work try the same approach, and stop relying on public benchmarks, which all seem to be gamed by now.
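For anyone wondering what this looks like in practice, here's a minimal sketch of such a harness. Everything here is hypothetical (the `TestCase` structure, `run_benchmark`, and the `fake_model` stub are mine, not the commenter's actual setup); you'd swap the stub for a real provider API client:

```python
# Minimal private-benchmark harness sketch. All names are illustrative;
# replace fake_model with a call to your actual provider/model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # returns True if the model output passes

def run_benchmark(model: Callable[[str], str], cases: list[TestCase]) -> float:
    """Run every private test case against a model; return the pass rate."""
    passed = sum(1 for case in cases if case.check(model(case.prompt)))
    return passed / len(cases)

# Stand-in for a real API call, so the sketch runs self-contained.
def fake_model(prompt: str) -> str:
    return "4" if "2+2" in prompt else "I don't know"

cases = [
    TestCase("arithmetic", "What is 2+2? Answer with just the number.",
             lambda out: out.strip() == "4"),
    TestCase("hallucination-check", "What is the capital of Atlantis?",
             lambda out: "don't know" in out.lower()),
]

print(run_benchmark(fake_model, cases))  # prints 1.0 for this stub
```

The point is that each `check` encodes your own pass/fail criterion, so the score reflects your workload rather than a public leaderboard; a `just bench` recipe would simply wire the provider and model name into `model`.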
Would you be willing to give a rough outline of one or a few test cases? I am having a bit of a hard time imagining what and how you are testing. Is it like "change the signature of function X in file @Y to take parameter Z" and then comparing the result with what you expect?
So why is Claude not cheaper than ChatGPT? Why won't they let me remove my payment info afterwards? Most other platforms like Steam let you do that. I don't want my shit sitting there waiting for the inevitable breach.
Everything is perception though. You are looking at this with your own perception, biases, and heuristics just like everyone else. There is no 'right' way to hire.