It has made my job an awful slog, and my personal projects move faster.
At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time-consuming; the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI-coded, so it did not obey the API design of the main project. It also included a ton of stuff you don’t need in a first pass – a ton of error checking and hand-rolled parsing, etc. – that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad, because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design-deconflicting task, so while it’s easy to get the initial concept out the gate almost instantly, you can’t just magically fit it into the bigger codebase without facing the technical debt you’ve generated.
In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.
I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.
If you don't take a stand and refuse to clean up their mess, aren't you part of the problem? No self-respecting proponent of AI-enabled development should suggest that the engineers generating the code are not still personally responsible for its quality.
Ultimately that's only an option if you can sustain the impact to your career (not getting promoted, or getting fired). My org (publicly traded, household name, <5k employees) is all-in on AI with the goal of having 100% of our code AI generated within the next year. We have all the same successes and failures as everyone else, there's nothing special about our case, but our technical leadership is fundamentally convinced that this is both viable and necessary, and will not be told otherwise.
People who disagree at all levels of seniority have been made to leave the organization.
Practically speaking, there's no sexy pitch you can make about doing quality grunt work. I've made that mistake virtually every time I've joined a company: I make performance improvements, I stabilize CI, I improve code readability, I remove compiler warnings, you name it. But if you're not shipping features, if you're not driving the income needle, you have a much more difficult time framing your value to a non-engineering audience, who ultimately signs the paychecks.
Obviously this varies wildly by organization, but it's been true everywhere I've worked to varying degrees. Some companies (and bosses) are more self-aware than others, which can help for framing the conversation (and retaining one's sanity), but at the end of the day, if I'm making a stand about how bad AI quality is while my AI-using coworker has shipped six medium-sized features, I'm not winning that argument.
It doesn't help that I think non-engineers view code quality as a technical boogeyman and an internal issue to their engineering divisions. Our technical leadership's attitude towards our incidents has been "just write better code," which... Well. I don't need to explain the ridiculousness of that statement in this forum, but it undermines most people's criticism of AI. Sure, it writes crap code and misses business requirements; but in the eyes of my product team? That's just dealing with engineers in general. It's not like they can tell the difference.
Hi, thanks for this brilliant feature. It will really improve the product. However, it needs a little more work before we can merge it into our main product.
1) The new feature does not follow the existing API guidelines found here: see examples a and b.
2) The new feature does not use our existing input validation and security checking code, see example.
Once the following points have been addressed we will be happy to integrate it.
All the best.
The ball is now in their court, and the feature should come back better.
This is a politics problem. Engineers were sending each other crap long before AI.
Engineers also wrote good code before AI. We don't get to pretend that the speed increase of AI only increases the output of quality code - it also allows engineers to send much more crap!
..so they copy/paste your message into Claude and send you back a +2000, -1500 version 3 minutes later. And now you get to go hunting for issues again.
In the past I’ve hopped on a call with them and asked them to show me it running. When it falls over, I say: here are the things the system should do; send me a video of the new system doing all of them.
The embarrassment usually shames them into actually checking that the code works.
If it doesn’t then you might have to go to the senior stakeholder and quietly demonstrate that they said it works, but it does not actually work.
You don’t want to get into a situation where “integrate” means write the feature while others get credit.
There is an alternative way to make the necessary point here: let it go through with comments to the effect that you cannot attest to the quality or efficacy of the code, and let the organization suffer the consequences of this foray into LLM usage. If they can't use these tools responsibly and are unwilling to listen to the people who can, then they deserve to hit the inevitable quality wall, where endless passes through the AI still can't deliver working software and their token budget goes through the ceiling trying to make it work.
I am absolutely certain the world isn't just. I'm also absolutely certain the world can't get just unless you let people suffer consequences for their decisions. It's the only way people learn.
IME that simply doesn't work in professional environments. People will either misrepresent the failure as a success or find someone else to pin the blame on. Others won't bother taking the time to understand what actually happened because they're too busy and often simply don't care. And if it's nominally your responsibility to keep something up, running, and stable then you're a very likely scapegoat if it fails. Which is probably why people are throwing stuff that doesn't work at you in the first place. Trying to solve the problem through politics is highly unlikely to work because if you were any good at politics you wouldn't have been in that situation in the first place.
I understand how people can get into these fatalist outlooks from experience. I just refuse to lock myself into them. And because I've refused to do so, every once in a while I have success and make the work environment just that little bit better. So I'll keep doing it.
> My org [...] is all-in on AI with the goal of having 100% of our code AI generated within the next year.
> People who disagree at all levels of seniority have been made to leave the organization.
So either they're right (100% AI-generated code soon) and you'll be out of a job or they'll be wrong, but by then the smart people will have been gone for a while. Do you see a third future where next year you'll still have a job and the company will still have a future?
"100% AI-generated code soon" doesn't mean no humans, just that the code itself is generated by AI. Generating code is a relatively small part of software engineering. And if AI can do the whole job, then white collar work will largely be gone.
I agree, but it seems like if we can tell the AI "follow these requirements and use this architecture to make these features", we're a small step away from letting the AI choose the requirements, the architecture and the features. And even if it's not 100% autonomous, I don't see how companies will still need the same number of employees. If you're the lead $role, you'll likely stay, but what would be the use of anyone else?
> ... I make performance improvements, I stabilize CI, I improve code readability, remove compiler warnings, you name it ...
These are exactly the kind of tasks that I ask an AI tool to perform.
Claude, Codex, et al. are terrible at innovation. What they are good at is regurgitating patterns they've seen before, which often means refactoring something into a more stable/common format. You can paste compiler warnings and errors into an agentic tool's input box and have it fix them for you, with a good chance of success.
I feel for your position within your org, but these tools are definitely shaking things up. Some tasks will be given over entirely to agentic tools.
> These are exactly the kind of tasks that I ask an AI tool to perform.
Very reasonable nowadays, but those were things I was doing back in 2018 as a junior engineer.
> Some tasks will be given over entirely to agentic tools.
Absolutely, and I've found tremendous value in using agents to clean up old tech debt with one-line prompts. They run off, make the changes, modify tests, then put up a PR. It's brilliant and has fully reshaped my approach... but in a lot of ways expectations on my efficiency are much worse now, because leadership thinks I can rewrite our tech stack in another language over a weekend. It almost doesn't matter that I can pass all this tidying off onto an LLM, because I'm expected to have 3x the output that I did a year ago.
Unfortunately, not many companies seem to require engineers to cycle between "feature" and "maintainability" work – hence those who chase low-hanging fruit and know how to virtue-signal build their careers on "features", while engineers passionate about correct solutions are left to pay for it and get labelled "inefficient" by management. It's all a clown show, especially now with vibe-coding – no wonder big companies have had multiple incidents since vibing started taking off.
> Shipping “quality only” work for a long time can be stressful for your colleagues and the product teams.
I buried the lede a bit, but my frustration has been feeling like _nobody_ on my team prioritizes quality and instead optimizes for feature velocity, which then leaves some poor sod (me) to pick up the pieces to keep everything ticking over... but then I'm not shipping features.
At the end of the day if my value system is a mismatch from my employer's that's going to be a problem for me, it just baffles me that I keep ending up in what feels like an unsustainable situation that nobody else blinks at.
That's a different situation than the one I had in mind. I was assuming a sane culture that balances shipping features and quality work. What you're describing sounds like a serious value function mismatch.
Employees, especially ones as well-leveraged and overpaid as software engineers, are not victims. They can leave. They _should_ leave. Great engineers are still able to get better-paying jobs all the time.
> Great engineers are still able to get better-paying jobs all the time
I know a lot of people who tried playing this game frequently during COVID, then found themselves stuck in a bad place when the 0% money ran out and companies weren’t eager to hire someone whose resume had a dozen jobs in the past 6 years.
Came here to say this. The right solution to this is still the same as it always was: teach the juniors what good code looks like, and how to write it. Over time, they will learn to clean up the LLM’s messes on their own, improving both of your jobs.
You can and should speak up when tasks are poorly defined, underestimated, or miscommunicated.
Try to flat out “refuse” assigned work and you’ll be swept away in the next round of layoffs, replaced by someone who knows how to communicate and behave diplomatically.
> It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly.
Why the hell are you playing hero? Delegate the choice to your manager: ruin the codebase, or allocate two weeks for clean-up – their choice. If the magical AI team claims they can do the integration faster, let them.
IME one thing that makes this choice a very difficult one is oncall responsibilities. The thing that incentivizes code owners to keep their house in order is that their oncall experience will be a lot better. And you're the only one who is incentivized to think this way. Management certainly doesn't care. So by delegating the choice to management you're signing up for a whole bunch of extra work in the form of sleepless oncall shifts.
If someone is making the kind of mistakes that cause oncall issues to increase, put that person on call. It doesn't matter if they can't do anything; call them every time they cause someone else to be paged.
IME too many don't care about on call unless they are personally affected.
> If someone is making the kind of mistakes that cause oncall issues to increase
the problem is that identifying the root cause can take a lot of time, and often the "mistakes" aren't clearly sourced down to an individual.
So someone on call just takes the hit (a la waking up at 3am and having to do work). That someone may or may not be the original progenitor of said mistake(s).
Framed less blamefully, that's basically the central thesis of "devops". That is the notion that owning your code in production is a good idea because then you're directly incentivized to make it good. It shouldn't be a punishment, just standard practice that if you write code you're responsible for it in production.
If they're handing you broken code, call them out on it. Say: this doesn't do what it says it does; do you want me to create a story for redoing all this work?
I've heard of human engineers who are like that. "10x", but it doesn't actually work with the environment it needs to work in. But they sure got it to "feature complete" fast. The problem is, that's a long way from "actually done".
That is definitely one tell: the hand-rolled input parsing or error handling that people would never have done at their own discretion. The bigger issue is that we already do the error checking and parsing at the different points of abstraction where it makes the most sense. So it's bespoke, and redundant.
That is on the people using the AI and not cleaning up/thinking about it at all.
> At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up.
This has to be the most thankless job for the near future. It's hard and you get about as much credit as the worker who cleans up the job site after the contractors are done, even though you're actually fixing structural defects.
And god forbid you introduce a regression bug cleaning up some horrible redundant spaghetti code.
Near future being the key term here imo. The entire task I mentioned was not an engineering problem, but a communication issue. The two project owners could have just talked to each other about the design, then coded it correctly in the first pass, obviating the need for the code janitor. Once orgs adapt to this new workflow, they’ll replace the code janitors with much cheaper Claude credits.
We’ve had this too, and we made a change to our code review guidelines to mention rejection if code is clearly just AI slop. We’ve let four contractors go so far over it. Sure, they get work done fast, but when it comes to making it production-ready they’re completely incapable. Last time we merged it anyway to hit a budget; it set everyone back, and we’re still cleaning up the mess.
> In our shop, we have hundreds of agents working on various problems at any given time. Most of the code gets discarded. What we accept to merge are the good parts.
What you’ve described is an incredibly expensive and inefficient genetic algorithm with a human review as the fitness function. It’s not the flex you might think it is.
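To make the analogy concrete, here is a toy sketch of that loop – purely hypothetical names and scoring, with a scoring function standing in for the human reviewer:

```python
import random

def generate_candidate(rng: random.Random) -> list[int]:
    # Stand-in for one agent producing a candidate patch.
    return [rng.randint(0, 9) for _ in range(5)]

def review_score(candidate: list[int]) -> int:
    # Stand-in for the human reviewer acting as the fitness function;
    # in reality this is the expensive, non-automatable step.
    return sum(candidate)

def run_shop(num_agents: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(num_agents)]
    # Most candidates are discarded; only the top-scoring one is "merged".
    return max(candidates, key=review_score)

best = run_shop(num_agents=100)
```

The cost structure is the point: generation is cheap and parallel, but every discarded candidate still consumed tokens and a slice of reviewer attention.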
I look forward to the day we pull our heads out of the sand and stop excusing blatant corruption. It takes a naive view of the world to assume the Secretary of Commerce has access to the same limited information as you or I.
Let’s call all of this what it is: parasites leveraging their insider positions for profit. The ruling class is ripping the copper out of our walls and selling it for scrap while we all choose to look the other way.
While some of it is boosting the abnormal behaviors of people suffering from mental illness, I think you’re making a false equivalency. Mental illness is not required to be an asshole. In fact, most Twitter assholes are probably not mentally ill. They lack ethics, they crave attention, they don’t care about the consequences of their actions. They may as well just be a random teenager, an ignorant and inconsiderate adult, etc., with no mental illness but also no scruples. Don’t discount the banality of evil.
In an adult (excluding the random teenager here), a lack of ethics, craving attention, lack of concern about consequences are actual symptoms of underlying mental health issues.
I'd argue a lot of this is rooted in a lack of self esteem, which is halfway to a mental health issue but not quite there (yet). The attention-seeking itself is the mental health issue. But it's kinda splitting hairs, these people are not fully mentally healthy either way.
So why are we still telling ourselves this process is used to assess suitability for a job? Is it not just a hazing process? Maybe tech sucks because the people who take the abuse or game the system outnumber the results-oriented.
So then the corps find a way to fire you for something other than AI displacement, replace you with AI anyway, and you’re on your own. Basically identical to firing someone in a clever way that avoids having to pay unemployment, which already happens quite frequently.
I don’t understand why taxation is so off limits to this crowd. We seem to live in a death cult where avoiding a slight inconvenience to 100 people is more important than providing a decent standard of living for the other 345 million people. You can invent whatever clever little solution you want in the meantime but eventually the chickens will come home to roost.
>I don’t understand why taxation is so off limits to this crowd.
HN is filled with lots of temporarily embarrassed millionaires, and many actual millionaires too. These are the ones that have bought into zero-tax, government-is-all-bad, free-market-capitalism-for-me Randian ideas without any systematic thought on how their ideas would work out in practice.
Add to this that a lot of media, and pretty much everything on TV, is owned by billionaires these days that use the news as their platform to propagandize on why they should own more of everything and become richer, so it's not exactly surprising we're at this place.
Goes both ways. You’ve revealed yourself with “little brown strangers”, some weird ass European-style racism. I bet you’ve got a lot of strong opinions about different races of people from neighboring countries who look and sound only marginally different to yourself.
Acidified oceans, poisonous air, and frequent multibillion dollar extreme weather events are a small price to pay for a purely hypothetical $2,400 off my next car, which I am forced to own because the same companies that lobby against climate change regulations are the ones that tore up all the public transit infrastructure that would otherwise allow me not to own a car at all. Americans love getting fucked by our corporate overlords, we can’t get enough of it, it’s our way of life.
The US seems culturally ill-equipped to deal with this reality. We have encouraged several generations of people to channel all of their talents into maximizing their individual income, regardless of externalities or impact to their community. There is low trust and minimal social reward for giving back. We idolize the loudest, most ignorant voices only because they are wealthy and famous. In my own work with the next generation of tech workers, this seems obvious. The younger generations see it as a zero sum game. You only win by making as much money as possible, and the ends justify the means.
I think they should. Let’s kick off some meaningful economic growth in Europe and provide a counter to the increasingly hegemonic, anti-human US tech oligarchs that have reaped all of the financial rewards of algorithmic radicalization and surveillance capitalism for the past 20 or so years. Maybe Europe can imagine something better.
I don't know, you might be underestimating how much damage the orange in charge is really doing to the interests of the US. Change is slow, and the subtle things set in motion are always perceived too late. A simple example would be a small county in Germany saving 5+ million a year thanks to moving away from Microsoft. Add that to the budget of the many (largely European) open-source projects out there, and you can see things can shift – slowly at first, but rapidly once noticed.
Europe needs to roll back all of the socialism if it wants to compete with the US and China. European tech is never going to keep pace if the people who build it only work 35 hours a week and take a year of paternity leave every time they have a kid.
With decades of education cuts, top STEM researchers leaving the country, and immigration coming to a halt, I think you overestimate the future competitive position of the US.
No, we do not need to roll back our humanity. The US population, however, really needs to wake up and start unionizing and voting for politicians that are not big orange incompetent babies.
How do they compete for actual tech then? Like Airbus.
- A 35h week doesn’t prevent engineers from legally working more (most do)
- In the age of AI, code velocity is no longer about time spent, but about fresh brains
- And much more important: it is significantly more efficient to have an employee 10 years in one place than 2 years in 5 places. What do you think explains higher US turnover than Europe's?
Here’s the difference between US and Europe: in US tech, productivity gains due to AI will lead to lower employment and higher expectations for the remaining employees. Salaries will remain the same and any increases in profit will of course go straight to the capital owning class. It will continue to be great for a vanishingly small number of people. On the contrary, Europe’s “socialism” makes them well-prepared to deliver the same level of productivity with AI using more people working fewer hours. And their “socialist” attitude toward where that value should go will result in an increased standard of living for everyone. You know, like the AI utopia we’ve all been promised.