been thinking the same, but I imagine you could explicitly separate notes and slop, e.g. something as simple as a cron job that goes through all your notes and creates a PR if there's some easy win: typos, inconsistencies, tags, etc
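something like this, maybe (rough sketch, assuming a git repo of notes plus codespell and the gh CLI; the branch name and the `notes/` path are made up):

```python
#!/usr/bin/env python3
# rough sketch, meant to be run from cron; assumes a git repo of notes,
# codespell and the gh CLI installed, and a remote named "origin"
import datetime
import subprocess

BRANCH = "notes-cleanup-" + datetime.date.today().isoformat()

def run(*cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

run("git", "checkout", "-b", BRANCH)

# codespell -w rewrites obvious typos in place; swap in whatever checks you trust
subprocess.run(["codespell", "-w", "notes/"])

if run("git", "status", "--porcelain").stdout.strip():
    run("git", "commit", "-am", "notes: automated typo fixes")
    run("git", "push", "-u", "origin", BRANCH)
    run("gh", "pr", "create", "--fill")  # --fill builds the PR title/body from the commit
else:
    run("git", "checkout", "-")  # nothing worth a PR today
```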
I've been coding like this lately: if I'm too lazy to review a new non-critical section/unit tests, I'll mark it as `// SLOP`; later, if I have to, I'll go through the entire thing and unmark it
shitty tests are better than no tests, as long as your expectations are low enough
so it's ok to say "SSD read/write speed", but now that we have something closer to the original meaning of the word, someone always has to point out that "LLMs don't have a soul" (or whatever you think is required for it to count as akchyually reading)
If I can just stand up for the nitpicker - arguably in the uncanny valley it’s more natural to point out it’s not reading (by their definition) than outside it (SSDs).
makes sense in a philosophical debate or when you're talking to your confused grandparents, but does anyone on hn not know how LLMs work, at least on the level of "tokens, matrices, data, sgd"?
otherwise, that reminder must imply that people do know how it works, and yet they still ascribe to these models some property like qualia, i.e. something other than "being able to turn english into code and compute into shareholder value";
but then if you disagree, why even mention it in the first place? do atheists randomly proclaim "btw god isn't real!" in unrelated conversations with strangers of unknown religious beliefs?
Yeah but ultimately it's all just function approximation, which produces some kind of conditional average. There's no getting away from that, which is why it surprises me that we expect them to be good at science.
They'll probably get really good at model approximation, as there's a clear reward signal, but in places where that feedback loop isn't possible, or is very difficult, we shouldn't expect them to do well.
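to make "conditional average" concrete, here's a toy numpy sketch (mine, not a claim about how LLM losses actually work): fit anything under squared error and what you recover is E[y|x], which can be an answer the data never literally contains

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: for each x the "right" answer is either sin(x)+1 or sin(x)-1,
# picked at random, so no single output is ever literally correct
x = rng.uniform(-3, 3, 2000)
y = np.sin(x) + rng.choice([-1.0, 1.0], size=x.shape)

# least-squares polynomial fit = function approximation under squared error
coeffs = np.polyfit(x, y, deg=7)
pred = np.polyval(coeffs, x)

print(np.abs(pred - np.sin(x)).mean())  # small: the fit tracks E[y|x] = sin(x)
print(np.abs(pred - y).mean())          # ~1.0: it's far from every actual answer
```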
weird, for me it was too un-human at first, taking everything literally even when it didn't make sense; I started being more precise with prompting, to the point where it felt like "metaprogramming in english"
claude on the other hand was exactly as described in the article
torturing a model with human stupidity probably doesn't align with their position on model welfare; wondering if they tried bullying it into hacking its way out of the slop gulag
They weren't claiming it was dangerous because "AGI soon"; that didn't come until later.
OpenAI were claiming GPT-2 was too dangerous because it could be used to flood the internet with fake content (mostly SEO spam).
And they were somewhat right. GPT-2 was very hard to prompt, but with a bit of effort it could spit out endless pages that were good enough to fool a search engine, and even a human at first glance (you were often several paragraphs in before you realised it was complete nonsense).