Hacker News | scottLobster's comments

Or they don't see the problem. Someone's paying $600-900k to live in a townhouse 1,000 ft from the runways at Dulles Airport:

https://www.zillow.com/homes/for_sale/?category=SEMANTIC&sea...


Reminds me of former Toledo Mayor Carty Finkbeiner’s suggestion that deaf people buy homes near the airport.

https://archive.seattletimes.com/archive/19941105/1939991/oh...


Because I don't choose what tools are available on every server at work, and it's guaranteed that at the very least old-school vi is installed on every linux server, and often vim. Maintaining that muscle memory is useful.

I used to think this too, but I've been routinely switching back and forth between neovim and vim for close to a decade now, and I've never noticed. In fact I often don't even notice which one I'm using unless I explicitly check. Once you add neovim-only plugins that can change, of course, but if you can't choose what tools are available on the server, then I would imagine you're not installing plugins anyway.

I'll add "craftsmanship". It isn't just delivering "A" finished product, you want to deliver a "good", if not "the best", finished product.

I guess if you're in an iterative MVP mindset then this matters less, but that model has always made me a little queasy. I like testing and verifying the crap out of my stuff so that when I hand it off I know it's the best effort I could possibly give.

Relying on AI code denies me the deep knowledge I need to feel that level of pride and confidence. And if I'm going to take the time to read, test and verify the AI code to that level, then I might as well write most of it unless it's really repetitive.


I don't think AI coding means you stop being a craftsman. It is just a different tool. Manual coding is a hand tool, AI coding is a power tool. You still retain all of the knowledge and as much control over the codebase as you want, same with any tool.

It's a different conversation when we talk about people learning to code now though. I'd probably not recommend going for the power tool until you have a solid understanding of the manual tools.


It can be a power tool if used in a limited capacity, but I'd describe vibe-coding as sending a junior construction worker out to finish a piece of framing on his own.

Will he remember to use pressure treated lumber? Will he use the right nails? Will he space them correctly? Will the gaps be acceptable? Did he snort some bath salts and build a sandcastle in a corner for some reason?

All unknowns and you have to over-specify and play inspector. Maybe that's still faster than doing it yourself for some tasks, but I doubt most vibe-coders are doing that. And I guess it doesn't matter for toy programs that aren't meant for production, but I'm not wired to enjoy it. My challenge is restraining myself from overengineering my work and wasting time on micro-optimizations.


Meanwhile, Linus argued against debuggers in 2000: https://lwn.net/2000/0914/a/lt-debugger.php3

But then he changed his tune? Even on LLMs...


> I'll add "craftsmanship". It isn't just delivering "A" finished product, you want to deliver a "good", if not "the best", finished product.

I don't raise a single PR that I feel I wouldn't have written myself. All the code written by the AI agent must be high quality and if it isn't, I tell it why and get it to write bits again, or I just do it myself.

I'm having quite a hard time understanding why this is a problem for other people using AI. Can you help me?


If you take the time to read the code and understand it to that level, great. But that sort of belies the promise of vibe-coding, where all software engineers essentially become PMs to a bunch of agents.

I use AI to extract information from documentation and write me bespoke examples, but I'd never feel good relying on code it actually generated without extremely thorough testing and review.


> If you take the time to read the code and understand it to that level, great. But that sort of belies the promise of vibe-coding, where all software engineers essentially become PMs to a bunch of agents.

But why would I do vibe coding? I am releasing this code to production systems that will bring the company down if there is a significant error. And my human peers will give me hell for raising terrible code for review.

I have a helpful, endlessly patient junior engineer with superhuman typing speed who will take all of my advice and apply it exactly as I want, and write my code for me. When I see errors, I'll tell it, and I'll even ask it to remember why something is a problem in our code base (maybe not others). So it has memory and (mostly) won't do that again.

And I also make sure to apply the same quality to the tests we write together.

Over the last few months I'd say between 50% and 80% of the code being delivered to our repo is "typed" by agents. Humans are still guiding them and ensuring the quality meets our high standards.

I don't really have a grasp on how other people are working with this stuff that they're seeing problems with production code.


You can use (and create) tools to codify what you think of as "quality".

There's the new frontier for delivering good or the best products. Less relying on the feels of an experienced programmer and more configuring and creating deterministic tools to define quality.

Unless you get actual joy and enjoyment from writing 42 unit tests for a CRUD API with slight variations for each test. Then go ahead =)


That's a really good point. And I agree that kind of confidence in craftsmanship is something that's missing from agentic coding today... it does make slop if you're not careful with it. Even though I've learned how to guide agents, I still have some uneasiness about missing something sloppy they have done.

But then it makes me ask if the agents will get so good that craftsmanship is a given? Then that concern goes away. When I use Go I don't worry too much about craftsmanship of the language because it was written by a lot of smart people and has proven itself to be good in production for thousands of orgs. Is there a point at which agents prove themselves capable enough that we start trusting in their craftsmanship? There's a long way to go, but I don't think that's impossible.


I would argue that craftsmanship includes a thorough understanding and cognitive model of the code. And, as far as I understand it, these agents are syntactic wonders but can not really understand anything. Which would preclude any sort of craftsmanship, even if what they make happens to be well-built.

Maybe the internet has made me too cynical, and I'm glad people seem to be having a good time, but at time of posting I can't help but notice that almost every comment here is suspiciously vague as to what, exactly, is being coded. Still better than the breathless announcements of the death of software engineering, but quite similar in tone.

The other week I used Copilot to write a program that scans all our Amazon accounts and regions, collects services and versions, and finds the ones going EOL within a year. The data on EOL dates is scraped from several sources and kept in JSON. There are about 16 different AWS API calls involved. It generates reports in Markdown, JSON, and CSV, so humans can read the Markdown (it flags major things and explains them), and the CSV can be used to triage, prioritize, and track work over time. The result is deduplicated, sorted, consolidated (similar entries merged), and classified. I can automatically send reports to teams based on a regex of names or tags. This is more data than I get from the AWS Health Dashboard, and I can put it into any format I want, across any number of accounts/regions.

Afaik there are no open source projects that do this. AWS has a behemoth of a distributed system you can deploy in order to do something similar. But I made a Python script that does it in an afternoon with a couple of prompts.
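To give a sense of the shape of such a script (this is a hedged sketch, not the actual program: the `eol_dates.json` layout, the field names, and the single RDS collector below are all illustrative, and a real run would need AWS credentials):

```python
import datetime

def versions_going_eol(inventory, eol_dates, within_days=365):
    """Flag inventory entries whose EOL date falls within the window (or has passed).

    inventory: list of dicts like {"service": "rds", "engine": "mysql",
               "version": "5.7", "region": "us-east-1"}  (hypothetical shape)
    eol_dates: {"mysql": {"5.7": "2023-10-31", ...}, ...}  (hypothetical shape)
    """
    cutoff = datetime.date.today() + datetime.timedelta(days=within_days)
    flagged = []
    for item in inventory:
        date_str = eol_dates.get(item["engine"], {}).get(item["version"])
        if date_str and datetime.date.fromisoformat(date_str) <= cutoff:
            flagged.append({**item, "eol": date_str})
    return flagged

def collect_rds_inventory():
    """One collector out of the ~16 API calls mentioned: RDS engines per region."""
    import boto3  # needs AWS credentials; never called in this sketch
    inventory = []
    for region in boto3.session.Session().get_available_regions("rds"):
        rds = boto3.client("rds", region_name=region)
        # Real code would paginate; a single call is enough to show the shape.
        for db in rds.describe_db_instances()["DBInstances"]:
            inventory.append({"service": "rds", "engine": db["Engine"],
                              "version": db["EngineVersion"], "region": region})
    return inventory
```

The deduplication, the Markdown/CSV report writers, and the other ~15 collectors would layer on top of this same pattern.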


> almost every comment here is suspiciously vague as to what, exactly, is being coded

Why? You don't trust a newly-created account that has not engaged with any of the comments to be anything but truthful?


In my experience, I have "vibe coded" various tools and stuff that, while nice to have, isn't really something I need or brings a ton of value to me. Just nice-to-haves.

I think people enjoy writing code for various reasons. Some people really enjoy the craft of programming and thus dislike AI-centric coding. Some people don't really enjoy programming but enjoy making money or affecting some change on the world with it, and they use them as a tool. And then some people just like tinkering and building things for the sake of making stuff, and they get a kick out of vibe coding because it lets them add more things to their things-i-built collection.


I will say that I grieve the passing of 'coding', per se. I used to love getting the flow, envisioning the data flows and object structures and cool mechanisms, refactoring to perfection. I truly miss it.

But the payoff for letting that go is huge.


> I used to love getting the flow, envisioning the data flows and object structures and cool mechanisms, refactoring to perfection.

You still have to do this; LLMs are still quite bad at choosing the right data structures and dealing with their interdependent relationships.


Yes. I never really see people say wtf they're making. It's always "AI bot wrote 200k lines of code for me!" Alright, cool. Is the project something completely new? Useful? A rushed remake of a project that already exists in GitHub with actual human support behind it? I never see an answer.

I wrote SuperSecretCrypt.com, ScoreRummy.com. Other stuff, too.

I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.

I have clients. I automated a complicated data ingestion pipeline into a desktop app with a bulletproof process queue, localhost control panel and many features.

For another, I am writing an AI-specific app that is so cool. I wish I could tell you about it but it's definitely not a rushed remake of anything.

I hope that helps.


> SuperSecretCrypt.com

Is down. And the scoring one, no offense, seems like a project a junior would make to pad out their resume/portfolio. Nothing wrong with that of course, but I fail to see how this translates to all the hype being thrown around.


SuperSecretCrypt.com doesn't work.

I am currently using a Claude skill that I have been building out over the last few days that runs through my Amazon PPC campaigns and does a full audit: suggestions of bid adjustments, new search terms and products to advertise against, and adjustments to campaign structures. It goes through all of the analytics Amazon provides, which are surprisingly extensive, to find every search term where my product shows up, gets added to cart, and is purchased.

It's the kind of thing that would be hours of tedious work, then even more time to actually make all the changes to the account. Instead I just say "yeah do all of that" and it is done. Magic stuff. Thousands of lines of Python to hit the Amazon APIs that I've never even looked at.


And it doesn't freak you out that you're relying on thousands of lines of code that you've never looked at? How do you verify the end result?

I wouldn't trust thousands of lines of code from one of my co-workers without testing


> And it doesn't freak you out that you're relying on thousands of lines of code that you've never looked at?

I was a product manager for 15 years. I helped sell products to customers who paid thousands or millions of dollars for them. I never looked at the code. Customers never looked at the code. The overwhelming majority of people in the world are constantly relying on code they've never looked at. It's mostly fine.

> How do you verify the end result?

That's the better question, and the answer is a few things. First, when it makes changes to my ad accounts, I spot check them in the UI. Second, I look at ad reporting pretty often, since it's a core part of running my business. If there were suddenly some enormous spike in spend, it wouldn't take me long to catch it.


It's thousands of lines of variation on my own hand-tooling, run through tests I designed, automated by the sort of onboarding docs I should have been writing years ago.

I've been doing agentic work for companies for the past year, and first of all, error rates have dropped to 1-2% with the leading Q3 and Q4 models, with 2026's Q1 models blowing those out of the water while also being cheaper in some cases.

But second of all, even when error rates were 20%, the time savings still meant a viable business. A much more viable business, actually; a scarily viable business with many annoyed customers getting slop of some sort, and a human in the loop correcting things from the LLM before they went out to consumers.

Agentic LLM coders are better than your co-workers. They can also write tests. They can do stress testing, load testing, and end-to-end testing, though in my experience that's not even what course-corrects LLMs that well, so we shouldn't be trying to replicate processes made for humans with them. Like a human, an LLM facing a failing test is prone to just "correct" the test on the assumption that it's outdated, rather than treating the failure as a regression caused by a product change.

In my experience, type errors, compiler errors, deployment logs, and database entries have corrected the LLM's approach more than tests have. DevOps and data science, more than QA.


Why wouldn't you test? That sounds like a bad thing.

Me? I use AI to write tests just as I use it to write everything else. I pay a lot of attention to what's being done including code quality but I am no more insecure about trusting those thousands of tested lines than I am about trusting the byte code generated from the 'strings of code'.

We have just moved up another level of abstraction, as we have done many times before. It will take time to perfect but it's already amazing.


So people don't look at the code, or the tests.

So they don't know if it has the right behavior to begin with, or even if the tests are testing the right behavior.

This is what people are talking about. This is why nobody responsible wants to uberscale a serious app this way. It's ridiculous to see so much hype in this thread, people claiming they've built entire businesses without looking at any code. Keep your business away from me, then.


Do you trust the assembly your compiler puts out? The machine code your assembler puts out? The virtual machine it runs on? Thousands of lines of code you've never looked at...

None of that is generated by an LLM prone to hallucination, and it's perfectly deterministic unless there's a hardware problem.

And yes, I have occasionally run into compiler bugs in my career. That's one reason we test.


> None of that is generated by an LLM

How did you verify that?

> prone to hallucination

You know humans can hallucinate?

> is perfectly deterministic

We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.

> That's one reason we test

That's one way we can trust and verify code produced by an LLM. You can't stop doing all the other things that aren't coding.

I get there's a difference. Shitty code can be produced by LLMs or humans, and LLMs really can pump out the shitty code. I just think the argument that you can't trust code you haven't viewed is not a good one. I very much trust a lot of code I've never seen, and yes, I've been bitten by it too.

Not trying to be an ass; more trying to figure out how I'm going to deal for the next decade before retirement age. It's going to be a lot of testing and verification, I guess.


> How did you verify that?

The compiler works without an internet connection and requires too few resources to be secretly running a local model. (Also, you can inspect the source code.)

> You know humans can hallucinate?

We are talking about compilers…

> We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.

Unlike a compiler, an LLM does not produce code in a deterministic way, so it’s not guaranteed to do what the input tells it to.


It is for me because the LLM makes my ability to evaluate super, too.

Compiler theory and implementation are based on mathematical and logical principles, and hence much more provable and trustworthy than an LLM that's stitching together pieces of text based on 'training'.

"Trust"? God no. That's why I have a debugger

Also, you really do have to know how the underlying assembly integer operations work, or you can get yourself into a world of hurt. Do they not still teach that in CS classes?
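For example (a small Python sketch), the same addition that's exact in Python wraps around when reinterpreted as a machine-level 32-bit signed integer:

```python
import struct

# Python ints are arbitrary precision, so INT32_MAX + 1 is just another int...
big = (2**31 - 1) + 1            # 2147483648

# ...but the same bit pattern read back as a 32-bit *signed* int wraps around:
wrapped = struct.unpack("<i", struct.pack("<I", big & 0xFFFFFFFF))[0]
print(wrapped)  # -2147483648: two's-complement wraparound
```

In C or assembly, that wraparound (or, for signed overflow in C, undefined behavior) happens silently; this is exactly the kind of thing you only catch if you know how the machine-level operations work.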

Some _fun_ stuff i "coded" in a day each just in last couple weeks:

https://hippich.github.io/minesweeper/ - no idea why, but I'd had a couple weeks' desire to play minesweeper, and at some point I wanted a way to quickly estimate the probability of a mine being present in each cell. No problem - Copilot coded both the minesweeper game and then added the probabilities (hidden behind a "Learn" checkbox). Bonus: my wife now plays a game "made" by me and not some random version from the Play store.
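I don't know exactly how the Copilot version computes it, but a naive per-cell estimate (ignoring interactions between constraints; the grid encoding here is made up for illustration) can be sketched like this:

```python
def naive_mine_probs(grid):
    """Crude per-cell mine probability from each revealed number's local count.

    grid: list of strings; '0'-'8' = revealed count, '?' = unrevealed, 'F' = flag.
    For each number, (number - adjacent flags) / adjacent unknowns is spread over
    its unknown neighbors; each cell keeps the max estimate touching it.
    """
    rows, cols = len(grid), len(grid[0])
    probs = {}
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c].isdigit():
                continue
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
            flagged = sum(grid[i][j] == "F" for i, j in nbrs)
            unknown = [(i, j) for i, j in nbrs if grid[i][j] == "?"]
            if unknown:
                p = max(0, int(grid[r][c]) - flagged) / len(unknown)
                for cell in unknown:
                    probs[cell] = max(probs.get(cell, 0.0), p)
    return probs
```

A proper solver would intersect the constraints (and enumerate consistent mine placements), but this naive version already flags the obviously risky cells.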

Another one made in a day - https://hippich.github.io/OpenCamber - I am putting together an old car, so I will need to align the wheels on it at some point. There is Gyraline, but it is iOS-only (I think because sensor precision is not good enough on Android?), and it is not free. I have no idea how well mine will work in practice, but I can try it, because the cost of trying is so low now!

Yes, both of these are non-serious, fun projects, unlikely to have any impact. But it is _fun_! =)


It's also usually from people who stopped coding and haven't kept their skills up.

Or from people who have no more skin in the game: retirement.

In the past month, in my spare time, I've built:

- A "semantically enhanced" epub-to-markdown converter

- A web-based Markdown reader with integrated LLM reading guide generation (https://i.imgur.com/ledMTXw.png)

- A Zotero plugin for defining/clarifying selected words/sentences in context

- An epub-to-audiobook generator using Pocket TTS

- A Diddy Kong Racing model/texture extractor/viewer (https://i.imgur.com/jiTK8kI.png)

- A slimmed-down phpBB 2 "remake" in Bun.js/TypeScript

- An experimental SQLite extension for defining incremental materialized views

...And many more that are either too tiny, too idiosyncratic, or too day-job to name here. Some of these are one-off utilities, some are toys I'll never touch again, some are part of much bigger projects that I've been struggling to get any work done on, and so on.

I don't blame you for your cynicism, and I'm not blind to all of the criticism of LLMs and LLM code. I've had many times where I feel upset, skeptical, discouraged, and alienated because of these new developments. But also... it's a lot of fun and I can't stop coming up with ideas.


The combination of the internet and how insanely pushed every single facet of AI bullshit is has made me incredibly cynical. I see a post like this reach the top of HN by a nobody, getting top votes and all I can think is that this is once again, another campaign to try and make people feel better about AI.

Every time I've asked people about what the hell they're actually doing with AI, they vanish into the ether. No one posts proof, they never post a link to a repo, they don't mention what they're doing at their job. The most I ever see is that someone managed to vibe code a basic website or a CRUD app that even a below-average engineer can whip up in a day or two.

Like this entire thread is just the equivalent of karma farming on Reddit or whatever nonsense people post on Facebook nowadays.


Yes and they all mention Claude as if it's the only LLM that can code.

I wrote SuperSecretCrypt.com, ScoreRummy.com. Other stuff, too.

I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.

I have a freelance gig with a startup adapting AI to their concept. I have one serious app under my belt and more on the way.

Concrete enough?


SuperSecretCrypt.com doesn't work.

Think about why anybody would ever associate a production-level product with slop when consumers are polarized toward generative AI.

This site gets indexed.

There are too many disincentives to cater specifically to your suspicion and cynicism.


Also scientists generally suck at messaging and persuasion. They think if they just dial up the stakes and consequences a little more, it'll be compelling! Maybe if we make one more documentary with bad CGI disaster movie scenes, that'll do it! Same with the stupid "Doomsday clock" that is somehow always "the closest we've ever been to nuclear war!" whenever it gets trotted out. You'd think people who know what stochastic noise is would realize when they're producing it.

They would have made a lot more headway talking about clean air, clean water, jobs, and a bright prosperous future where we manufacture wind turbines, batteries and solar panels in deep red Missouri. A minority tried that, but most stuck with the catastrophizing for decades and now that they've ruined their social credit no one will listen to the message they should have opened with.

You need people emotionally invested, and it's a lot easier to get them invested in their lives than in the abstract consequences of computer models that are at least 100 years out if they're even accurate. And most people are not independent enough to direct their own lives. If they make the right decisions on abstract concepts, it was more because the incentives/disincentives in their environment were set up correctly than they actually understood the decision they were making. Message accordingly.


Every approach you've suggested above, and others besides, has been tried by scientists, NGOs and government agencies for the last few decades, and largely failed.

The IPCC has consistently DOWNPLAYED the negative consequences of climate change, and reality keeps outpacing their worst case predictions year after year.

Every attempt to message the reality and consequences of climate change, and the possible avenues for blunting it, has been tried. From the sugarcoating "everything will be rosy and great and abundant, look at all the benefits of green industry" to the milquetoast watered-down try-to-please-everyone messaging of the major political parties, to the desperate attempts to communicate the brutal reality of what we're facing (and still failing to match the reality that is consistently worse).

None of it works.

1) People are selfish, myopic, and stupid. They think about their short term personal needs and wants above all else. Large scale coordination on this issue is virtually impossible, see the Prisoner's Dilemma. Human psychology is simply not fit for this task.

The satisficing nature of evolution means we are the dumbest possible animal that could still achieve the technological civilization we have, and this is another example where it really shows.

2) The wealthiest and most powerful people and corporations on Earth have spent decades pushing propaganda attempting to sow doubt about climate change, because genuine action on it is directly against their interests.

Those poor multi-trillion dollar industries underpinning all modern society and power structures are the altruistic, honest bastions of truth, it's those evil corrupt post docs on minimum wage that are the truly corrupt and greedy ones, twisting the truth for their own financial gain and machiavellian ends!

And they've been far more effective than the cigarette companies of the early 20th century could have ever dreamed.


> The IPCC has consistently DOWNPLAYED the negative consequences of climate change, and reality keeps outpacing their worst case predictions year after year.

Except downplaying consequences downplays the upside, i.e. opportunity.

The dire warnings should be dire, but paired with a call to opportunity opportunity opportunity. Instead of focusing on enemies or little inconvenient things we could all do, if only we could all be uniformly focused and high minded enough as uncoordinated individuals.

The fact that virtually every way we can reduce climate damage involves new capabilities and resources with additional economic and health benefits (not to mention political disentanglements) makes positive, self-interested calls to profitable action much more sensible.

And political leaders shouldn't be afraid to work with the CFOs of fossil fuel companies to create incentives they want. It might be costly, but CFOs get flexible when there is a clear path to making more money. Any costs of smoothing that path (let's be clear: in a way that would be pure corruption if the size of the problem didn't make it a value creator) are nothing compared to the costs of climate change.

China gets it. (Not uniformly, of course, but more than most, and it's paying off for them.)


To be fair, he sounds a little more intense than your average player:

"A psychologist concluded that Friedmann suffered from dysthymia, today known as persistent depressive disorder, and schizoid personality disorder, writing that he showed “indifference to social relationships and a restricted range of emotional experience and expression.” He also had a tendency to “blur fantasy with reality.”

Friedmann was convicted of armed robbery and attempted murder"


"If you aim well enough" is doing a ton of work there. Precise real-time optical tracking of a satellite from a moving platform is an extremely difficult problem. Even if the satellite itself is geostationary, it would also have to rotate to keep the "cylinder" pointed in the right direction to maintain signal.

I suppose you could make a "cylinder" or "cone" broad enough that, if the threat were static, it could blot out attempted jamming from certain regions while staying open toward friendly zones.
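For a sense of scale, here are rough numbers using a crude flat-footprint approximation (ignoring Earth curvature and slant range; the constants and functions below are just for this back-of-envelope estimate):

```python
import math

GEO_ALT_M = 35_786_000  # geostationary altitude above the sub-satellite point, meters

def footprint_half_angle_deg(ground_radius_m):
    """Cone half-angle (from GEO) needed to cover a circle of this ground radius."""
    return math.degrees(math.atan(ground_radius_m / GEO_ALT_M))

# A 10 km anti-jamming footprint from GEO:
half_angle = footprint_half_angle_deg(10_000)

# Angular rate needed to track an airliner moving at ~250 m/s, seen from GEO:
track_rate = math.degrees(250 / GEO_ALT_M)

print(f"{half_angle:.4f} deg half-angle, {track_rate:.6f} deg/s tracking rate")
```

A ~0.016° half-angle means pointing good to hundredths of a degree; the angular tracking rate is tiny, but the sustained accuracy requirement is what bites.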


It's a geostationary sat. It doesn't move.

No, but the airplane it would be talking to does. Hard enough when your transceiver is wide open; if you narrow your FOV to a thin cone in order to block jamming signals, the GEO now has to physically track the airplane somehow.

Either the whole satellite rotates, or the transceiver is on a mount that can rotate.


Unless you plan on having 1 satellite per airplane, something tells me it's harder to constrain the FOV than you might suggest. There's also the small problem of the energy, complexity, & weight of having motorized parts on the satellite (or fine-grained attitude control for the satellite itself to track the craft).

Agreed, my point is it's a lot harder than tiagod made it sound.

It also doesn't account for some kind of mobile jammer making it inside the cone, particularly if it's staring at an adversarial nation where secure comms would be needed the most, but the adversary would have freedom of movement.


Welcome to the return of history. This is hardly the first time or industry where the US government has forced compliance that wasn't necessarily in the public interest.

And the corporations won't fight this. They're in it for the money, and they're willing to bring actual gold bars to the White House to ensure it keeps rolling in. They know what they're doing is corrosive and debasing; the more conscientious of them probably want to vomit on the inside. But they mostly suck it up and do it anyway, for their investors will discipline them if they don't.

Either people run candidates and vote for the ones that campaign on stopping this, or it happens.


> for their investors will discipline them

Worse: they'll be sacked and replaced with someone who will.

Like Trump's FCC chair was saying he'll revoke the license of stations that make republicans look bad. Those stations will then be replaced with more copies of Newsmax. CBS either toes the line or it gets shut down and replaced by a station that will.


CBS is owned by Trump supporters. They aren't being forced against their will, they are acting on their own political motivation.


> the return of history

The idea that somehow the current actions are 'real' history and what people were doing before was fake just feeds the claim of inevitability - a basic psyops maneuver: you can't win; our victory is inevitable.

People have made history for centuries of Enlightenment - the whole idea is that we can control our fates as individuals through reason and compassion (humanism), and we have done it. We have transformed the world. The only problem is people giving up - despite the incredible success of this idea over centuries - and accepting that they can't control their fate. Certainly MAGA-ish conservatives believe they can make history.


"The End of History and the Last Man" was a book written in 1992 about the end of the Cold War and how previous historical patterns no longer applied, and Western liberal democracy would sweep the world and usher in a world of peace.

The "return of history" is snarkily pointing out how historical trends have been reasserting themselves, and that Fukuyama (the author) was, at best, overly optimistic.


Fair enough, but,

> historical trends have been reasserting themselves

Western liberalism is an historical trend, just like the others.


The parent was likely referencing the idea that the end of the Cold War represented "the end of history".

https://en.wikipedia.org/wiki/End_of_history


Also hiring. It's easier to find people with JIRA experience than people who know your vibe-coded ticket manager, even if it is technically superior for your application.

If there is any commonality between the 3D printing craze and vibe-coding, they're both renditions of "just because you can, doesn't mean you should".


The claimed commonality is "early maximalist optimism turns into mature niche adoption."

Could be different this time around, or could be that the early naive optimism is just more widespread.


> Every frontier tech company is convinced that the tech they are working towards is as humanity-useful as a cure for cancer, and yet as dangerous as nuclear weapons.

Maybe some of the more naive engineers think that. At this point any big tech businesses or SV startup saying they're in it to usher in some piece of the Star Trek utopia deserves to be smacked in the face for insulting the rest of us like that. The argument is always "well the economic incentive structure forces us to do this bad thing, and if we don't we're screwed!" Oh, so ideals so shallow you aren't willing to risk a tiny fraction of your billions to meet them. Cool.

Every AI company/product in particular is the smarmiest version of this. "We told all the blue collar workers to go white collar for decades, and now we're coming for all the white collar jobs! Not ours though, ours will be fine, just yours. That's progress, what are you going to do? You'll have to renegotiate the entire civilizational social contract. No we aren't going to help. No we aren't going to sacrifice an ounce of profit. This is a you problem, but we're being so nice by warning you! Why do you want to stand in the way of progress? What are you a Luddite? We're just saying we're going to take away your ability to pay your mortgage/rent, deny any kids you have a future, and there's nothing you can do about it, why are you anti-progress?"

Cynicism aside, I use LLMs to the marginal degree that they actually help me be more productive at work. But at best this is Web 3.0. The broader "AI vision" really needs to die

