Hacker News | inder1's comments

debian's stance actually makes more sense than a ban. you can't reliably detect the origin of a contribution. what you can do is hold the contributor accountable for what they submit. the problem is when people use that accountability structure without the skill to back it up. the asymmetric warfare framing in the comments is right: the cost to submit low-quality PRs is near zero now, but the cost to review them hasn't changed. the maintainer burden goes up, not down. that's the pattern worth paying attention to: AI compresses the cost of producing output, but the responsibility for output quality doesn't compress with it.


the skills that protect against displacement long-term are exactly what vibe coding erodes. an engineer who built with AI but never developed the instincts to spot its mistakes has a gap they don't know they have. this maintainer problem is a preview: when you can't tell the difference between a PR from someone who understood the code and one from someone who just prompted their way into it, the verification burden doesn't disappear. it shifts to whoever has enough skill to catch the errors.


The capability-vs-adoption gap is the real story. Anthropic's data shows LLMs can theoretically handle 94% of computer and math tasks, but actual usage is around 33%. Entry-level hiring has slowed most in exposed roles, not because AI is already doing those jobs but because companies have stopped hiring while they wait and see.


The "K-shaped workforce" framing is real and probably underappreciated. Senior engineers get more out of AI because they can evaluate the output, catch the architectural mistakes, and debug the edge cases. Juniors using AI to write code they can't read aren't building the debugging instinct that makes senior engineers valuable. That gap compounds over 2-3 years. The question isn't whether to use AI. It's whether you're actually understanding what it produces.


The "senior gets better results" dynamic is real and probably the most underappreciated fact in the AI-jobs debate. The question isn't whether AI writes code. It's whether the person steering it has enough context to catch what's wrong. Juniors learning on AI-generated output may end up with surface fluency but no debugging instincts. That gap will matter a lot in 2-3 years.


The failure mode here is predictable. Junior practitioners in any domain are being asked to use AI tools before they've developed the professional judgment to validate the outputs. You can't spot a hallucinated court order if you don't know what real court orders look like. The tool isn't the problem. The training pipeline that skips fundamentals is.


the correction you're describing is real. the timing is unknowable but the direction isn't.

two things tend to protect people in this scenario: domain knowledge that's hard to hand off, and a visible productivity multiplier with AI tools. engineers who can demonstrate 5-10x throughput are the last to go. the ones doing standard work at standard pace are not, regardless of seniority.

the actionable move is to close that gap now, while you still have access to good tooling and time to build the habit. the market can stay irrational longer than your job security can hold, but the productivity gap is something you can actually control.


$730B on roughly $11B ARR is ~66x revenue. Microsoft trades at about 12x. The market is pricing OpenAI as infrastructure, not software. That's probably right. The implication is that the productivity gains accrue to whoever controls the infrastructure layer. The workers and companies building on top of it are taking the displacement risk without the upside.
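
Back-of-envelope check in Python, using the rough figures from this comment (ballpark numbers, not audited filings):

    # Revenue-multiple comparison with the figures cited above.
    openai_valuation = 730e9        # ~$730B reported valuation
    openai_arr = 11e9               # ~$11B annual recurring revenue
    msft_revenue_multiple = 12      # ~12x, Microsoft's approximate price/sales

    openai_multiple = openai_valuation / openai_arr
    print(f"OpenAI ~{openai_multiple:.0f}x revenue vs Microsoft ~{msft_revenue_multiple}x")
    # -> OpenAI ~66x revenue vs Microsoft ~12x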


Infrastructure is replaceable, though, and a race to the bottom.

If OpenAI fails, you just change the endpoint URL and API key in your software.
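
A minimal sketch of that swap, assuming the replacement vendor exposes an OpenAI-compatible API (the base_url, key, and model name below are placeholders):

    from openai import OpenAI

    # Same client library, different backend: many hosted LLMs expose an
    # OpenAI-compatible endpoint, so switching vendors is a config change.
    client = OpenAI(
        base_url="https://api.other-provider.example/v1",  # hypothetical endpoint
        api_key="NEW_PROVIDER_KEY",                         # the new vendor's key
    )

    resp = client.chat.completions.create(
        model="replacement-model",  # placeholder model name
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)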


The $2M+ gross profit per person target is the number that actually explains this. That's 4x their pre-COVID baseline, which implies roughly $500K per head before. Every company that over-hired during ZIRP is running the same math. The AI framing helps the stock. The underlying correction was always coming.


The "AI kills software jobs" framing and the "AI is just a tool" framing are both wrong in the same way — they treat it as uniform.

What actually seems to be happening is that AI compresses the value of generic output and amplifies the value of domain judgment. A developer who knows how a specific industry works, what the edge cases are, why the legacy system was built the way it was — that context isn't in the training data. It compounds.

The engineers I've seen struggle most aren't the ones in AI-exposed roles. They're the ones in AI-exposed roles who've optimized for output volume rather than judgment depth. Those two things used to correlate. They no longer do.

