Hacker News | IceDane's comments

How about reading the article?

It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.

> Some people are even comparing them to compilers.

A lot of people are using them as such, too: the number of people talking about "my fleets of agents working on 4 different projects" — they aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 consistently get things backwards at the margins, mess up, and introduce bugs. We wouldn't accept a compiler that did any of this, at this scale or at any level.


Right? People have put in decades of work to make compilers extremely reliable; they didn't magically start out that way.

It's awful, and seeing even engineers I respected become so AI-pilled that they're shipping slop without review has made me lose respect for them. It also can't help but make me wonder: what am I missing? Am I holding it wrong? Am I too focused on irrelevant details?

So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.

Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.


As much as I use AI, even for coding, I really do not like this argument. LLMs are too chaotic to be compilers. The descent from prompt to code has far too many branches, and even small requests begin to accumulate bad patterns.

There is some fun to be had when sufficiently advanced AI allows this in areas where we are okay with things going wrong, but that seems a very limited domain — fun and games, not serious software that needs to be as correct as possible.

I can see vibe coding building very simple systems, and it will likely get better at one-off throwaways where edge cases don't matter because we have a one-off need to turn input X into output Y. But for systems where correctness matters, long-term support must be provided, and ease of adding new functionality is a serious consideration, it seems we are as far from prompt-as-code as we are from AGI.


Outsource manual labor, not your brain.

Both?

I read pretty dense philosophy and the longer I live, the more I think the writers were just bad writers with good ideas. LLMs can convert poorly written sentences into clear sentences with examples.


This person is a card-carrying moron and has no idea how anything works. Even if we concede that maybe there should be some grace period or soft deletions or whatever...

Also, the post is 100% written by an LLM, which is ironic enough on its own. That makes it all the more curious that this argument appears in this slop, because any LLM would point it out. But if you badger one enough, it will concede to your demands, so you just know this clown was yelling at his LLM while writing this post.

He really should've thrown this post at a fresh session and asked for an honest, critical review.


This is the stupidest thing I've read in months, which is wild with the Trump admin and all the AI hype.

Not only do they blame all of this on a stupid tool, but they also clearly couldn't even write this themselves. This is so obviously written by an LLM. Then there's the moronic notion of having the LLM explain itself.

Was the goal of this post to sabotage the business? Because I can barely come up with anything dumber than this post. Nobody with a brain and basic understanding of computers and LLMs would trust this person after this.

PS: "Confirm deletion" on an API call??? Lol. The vehemence with which it is argued, despite how dumb it is, is a typical example of someone badgering the LLM until it agrees. You can get them to take any position as long as you get mad enough.


This is the worst article I've ever seen. Did the author read this? Is there even really an author, or did ChatGPT just write all of it and generate the page?


Okay, so this is just a text prompt — which could have been actual UI elements where I select dimensions — fed to an LLM in the poorest way possible, so it doesn't even work properly, where you ask it to kinda sorta solve a bin-packing problem.

Imma pass.


I can scarcely imagine a way to formulate an argument that is better at convincing the reader that the author is a grumpy dude on the spectrum.


There is no way to autogenerate migrations that work in all cases. There are lots of things out there that can generate migrations that work for most simple cases.
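To make the "works for simple cases, not all cases" point concrete, here is a hypothetical, framework-free sketch of a naive diff-based migration generator (this is not any real tool's code; the table and column names are made up). It handles plain adds and drops fine, but a rename is indistinguishable from a drop-plus-add, which is exactly the kind of ambiguity that forces real tools to guess or ask the developer.

```python
# Naive diff of two schemas: generate SQL to go from old_cols to new_cols.
# Handles simple adds/drops, but cannot detect renames.

def diff_migrations(old_cols, new_cols, table="users"):
    """Return SQL statements to migrate the table from old_cols to new_cols."""
    added = set(new_cols) - set(old_cols)
    dropped = set(old_cols) - set(new_cols)
    stmts = []
    for col in sorted(dropped):
        stmts.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    for col in sorted(added):
        stmts.append(f"ALTER TABLE {table} ADD COLUMN {col};")
    return stmts

# Renaming "fullname" to "full_name" looks identical to a drop + add,
# so this naive diff would silently destroy the column's data.
print(diff_migrations(["id", "fullname"], ["id", "full_name"]))
```

Real generators (Django's makemigrations, Alembic's autogenerate) layer heuristics and interactive prompts on top of a diff like this, which is why they cover most cases but not all.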


Django manages to autogenerate migrations that work in the VAST majority of cases.


They don't need to work in every case. For the past ~15 years, 100% of the autogenerated migrations I have made for creating tables, columns, or column renames have just worked, and I have made thousands of migrations at this point.

The only thing to manually migrate are data migrations from one schema to the other.
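To illustrate why data migrations resist autogeneration, here is a framework-free sketch (the row shape and field names are made up for illustration): the transform from the old schema's rows to the new one's is arbitrary application logic that no schema-diffing tool could infer.

```python
# A hand-written data migration step: split the old single "name"
# field into the new "first_name"/"last_name" fields. Row dicts
# stand in for database rows.

def migrate_rows(rows):
    """Transform rows from the old schema's shape to the new one."""
    migrated = []
    for row in rows:
        first, _, last = row["name"].partition(" ")
        migrated.append(
            {"id": row["id"], "first_name": first, "last_name": last}
        )
    return migrated

print(migrate_rows([{"id": 1, "name": "Ada Lovelace"}]))
```

In Django, logic like this would live inside a hand-written migration via RunPython; the point is that the splitting rule itself is a human decision.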


I end up needing to write a manual migration maybe once every other year in real world use.


That's why you can do your own migrations in Django for those edge cases.


It is blatantly obvious to anyone with just a little bit of experience that the reddit devs barely know what they are doing. This applies to their frontend as well as backend. For some reason, reddit is also the only major social network where downtime is expected. Reddit throwing 500 errors under load happens literally every week.


Presumably the mobile apps work better; they don't care very much about the website because they want to push everyone to the app anyway.


Reddit also puts the "eventually" in "eventually consistent". Not in the sense of consistency being driven by events, but in the colloquial sense of "someday". The new message indicator will go away ... eventually. Your comment will show up ... eventually.

