Maybe to get a real breakthrough we have to make programming languages / tools better suited to LLM strengths, instead of fussing so much about making LLMs write code we like. What we need is correct code, not nice-looking code.
> programming languages / tools better suited for LLM strengths
The bitter lesson is that the best languages / tools are the ones for which the most quality training data exists, and that's pretty much necessarily the same languages / tools most commonly used by humans.
> Correct code not nice looking code
"Nice looking" is subjective, but simple, clear, readable code is just as important as ever for projects to be long-term successful. Arguably even more so. The aphorism about code being read much more often than it's written applies to LLMs "reading" code as well. They can go over the complexity cliff very fast. Just look at OpenClaw.
I guess it's hard to tell until we see more long-term AI-generated projects, but many of the ones we have so far (OpenClaw and OpenCode, for instance) are well-known for their stability issues, and it seems "even more AI" is not about to fix that.
I have never met an uncreative kid, and studies show kids tend to be more open and creative. But I have to admit I haven't met and interacted with that many average kids, so there may be some that aren't creative, but the majority are.
> Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent
A lot of the gadgets that Steve Jobs claimed were envisioned by Apple (or rather: by him) already existed before, just with a few more rough edges. As I wrote: Steve Jobs was an exceptional salesman. The earlier versions did not sell well because those companies lacked a marketing department that could make people believe their product was the next big thing.
> Jobs envisioned the iPad and iPhone. [...]
Everyone around him at that time has commented on this. Are you going to claim they’re all lying?
I don't claim that they are all lying, but I do claim that quite a few people fell for Apple's marketing (as I wrote: "Jobs' talent was that he was an incredibly talented salesman").
They are not comparable: ffmpeg-python just abstracts away the CLI, while pyav is a low-level binding of the ffmpeg libraries.
It may seem "dead", but ultimately it just helps you build CLI commands in a saner way, and the CLI interface to ffmpeg has been consistent for a long time. The only things that change are individual filters, which you can just pass raw to ffmpeg-python.
I remember when I was heavily using it last year, I found a fork that seemingly had saner typing or something. But since the LLMs at the time didn't know about the newer library yet could write decent ffmpeg-python code, I stuck with ffmpeg-python, and it did the job.
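To illustrate the point that a library like this mostly just builds CLI commands in a saner way, here is a minimal stdlib-only sketch of the idea. The `FFmpegCmd` class below is a hypothetical stand-in, not ffmpeg-python's actual API, but the principle is the same: compose a pipeline fluently, then emit (or run) a plain `ffmpeg` invocation, with raw filter strings passed straight through.

```python
import shlex
import subprocess

class FFmpegCmd:
    """Hypothetical mini-builder mimicking the idea behind ffmpeg-python:
    compose inputs, filters, and outputs, then emit a plain CLI command."""

    def __init__(self):
        self.args = ["ffmpeg"]

    def input(self, path):
        self.args += ["-i", path]
        return self

    def filter(self, raw_filter):
        # Raw filter strings pass straight through, so new or changed
        # ffmpeg filters need no support from the wrapper library.
        self.args += ["-vf", raw_filter]
        return self

    def output(self, path):
        self.args.append(path)
        return self

    def compile(self):
        # The "abstraction" is ultimately just an argument list.
        return list(self.args)

    def run(self):
        # Shelling out to the ffmpeg binary is all the wrapper really does.
        subprocess.run(self.compile(), check=True)

cmd = FFmpegCmd().input("in.mp4").filter("scale=1280:-1").output("out.mp4")
print(shlex.join(cmd.compile()))
# -> ffmpeg -i in.mp4 -vf scale=1280:-1 out.mp4
```

Because the stable ffmpeg CLI is the real interface, the wrapper has very little surface area that can rot, which is why a "dead" project like this can keep working.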
And that same information contained in an LLM is a compression of how many terabytes of training data? Maybe in the future there will be models an order of magnitude smaller that still perform better.
What I'm saying is you can't judge the data in the genome by purely counting the bytes of data.
This has happened in online chess, with some people admitting to using engines (i.e. cheating) to "confirm their suspicion that the other guy is cheating".