Altman called GPT-2 "too dangerous to release". Google tends to be much more measured, even though they're the ones actually releasing the research breakthroughs.
Much of the article and general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't really command attention the same way as someone "controlling your future".
But it was well understood that the subscription was heavily subsidized. Whether or not it was a "separate product" doesn't matter as much as the fact that pricing was not sustainable.
It was not well understood that it would stop being subsidized without notice.
Does that just not matter in modern society? Am I an asshole for expecting the product I pay for on day 1 to be the same on days 8 and 29 of a 30-day subscription?
Sure, you could argue that counterfactual, but how is Costco actually implicated? Does Costco have a contract with its members that sets a limit on the margins they can charge? If so, then I suppose they could get sued for breach of contract. If not, as I suspect, then on what grounds could you actually sue them? Just because you feel like a business charges too much doesn't mean you get to sue them.
I agree. I don’t think most of these products should be forking and maintaining a whole IDE.
That’s also not how I think about ctx. The UI is a workbench around agents, not a replacement for IntelliJ/VS Code. If you need deep code navigation, refactors, debugger-heavy work, etc., the right answer is usually to open the same worktree in your IDE.
ctx includes surfaces for diff review and an integrated terminal, but not code editing or a full-fledged IDE. It's not a fork of VSCode.
Because you probably need both if you're doing guided agentic work. The IDE gives you the familiar benefits, especially code navigation. If you're using background agents, or launching agents without reviewing their work, then I guess you don't need an IDE.
I have the exact same setup, and the editor for me is just neovim, as I can easily see changes (lazygit) and make small tweaks. The only thing I'm missing in my workflow is some isolation for running claude so I can let it go without having to approve tools.
I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but those use-cases are supported by a Gordian knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, while the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will eventually fall into this trap, but it will take many years.
> It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken
LLMs are often poor at writing tests that provide useful information to human readers and poor at writing tests that can survive project evolution. To be fair, humans are also poor at these tasks if done in hindsight, after all the information you normally want to capture in tests has been forgotten. That boat has been missed for the legacy code no matter how you slice it. But LLMs are quite good at writing tests that lock in existing functionality in the rawest way. It seems like LLM-generation is actually the best hope of saving such a project?
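Locking in existing functionality "in the rawest way" is what's usually called a characterization (or golden-master) test. A minimal sketch of the idea, with a hypothetical `legacy_price` function standing in for entrenched legacy code (in practice you'd record the golden outputs once and commit them, rather than regenerating them in the test run):

```python
# Hypothetical stand-in for legacy logic whose exact behavior, bugs and
# all, must not change for existing customers.
def legacy_price(quantity, tier):
    base = 9.99 * quantity
    if tier == "gold":
        base *= 0.9
    return round(base, 2)

# Characterization test: sweep representative inputs and freeze whatever
# the code returns today as the contract. These recorded pairs would
# normally be captured once and checked into the repo.
GOLDEN = {
    (1, "basic"): 9.99,
    (1, "gold"): 8.99,
    (5, "basic"): 49.95,
    (100, "gold"): 899.10,
}

def test_legacy_price_unchanged():
    for (quantity, tier), expected in GOLDEN.items():
        assert legacy_price(quantity, tier) == expected
```

The point is that no one has to remember *why* gold gets 10% off; the test only asserts that today's behavior survives tomorrow's refactor, which is exactly the kind of mechanical, exhaustive test an LLM can generate cheaply.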