I think it's often genuine excitement to share a thing - without quite processing that anybody with the same idea can now build it (for simple- to mid-complexity projects).
The novelty of "new thing! That would have been incredibly hard a decade ago!" hasn't worn off yet.
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcutting.
When movies first came out, filmmakers would shoot random stuff because it was cool to see a train moving directly at you. The novelty didn't wear off for years.
There was something someone said in a comment here, years and years ago (pre AI), which has stuck with me.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them, they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X, Y, and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is raise the floor for everybody to a higher level of technical implementation.
And the explosion in software produced with AI by lay-people will mean that those with offensive security skills, who can crack and exploit software systems, will have incredible power over others.
I think that when a software system is used by more people and has more eyes on it, its security flaws are more likely to be found and fixed. Then all the users benefit from the fix.
The more that software fragments into bespoke applications used by small numbers of people, the less everyone benefits from those security network effects.
I believe the security vulnerability issues will be addressed by companies using cloud-based vibe-code platforms or an AI security auditor agent that runs through the codebase and flags security issues.
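For what it's worth, a toy sketch of what that "auditor agent" pass might look like. Everything here is my own illustration: the regex heuristics stand in for the actual model call a real agent would make, and the rule names and function names are invented for the example, not any real tool's API.

```python
import re
from pathlib import Path

# Toy heuristics standing in for an LLM call. A real auditor agent would send
# the file contents to a model and ask for ranked, explained findings; these
# patterns just make the walk-and-flag loop concrete.
RULES = [
    (re.compile(r"""password\s*=\s*["'][^"']+["']""", re.IGNORECASE),
     "hardcoded credential"),
    (re.compile(r"\beval\s*\("),
     "eval() on possibly untrusted input"),
]

def audit_source(text, name="<memory>"):
    """Scan one file's text; return (file, line_no, issue) findings."""
    findings = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for pattern, issue in RULES:
            if pattern.search(line):
                findings.append((name, line_no, issue))
    return findings

def audit_tree(root):
    """Walk a codebase and flag every match in .py files."""
    findings = []
    for path in Path(root).rglob("*.py"):
        findings.extend(audit_source(path.read_text(errors="ignore"), str(path)))
    return findings
```

The loop is the point, not the rules: swap `RULES` for a model prompt per file and you have the cloud-platform version the comment imagines, with all the usual caveats about false positives and missed issues.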
Sure it is. AI software development is here. It's not good enough for everything, but it's good enough for a majority of the changes made by most software engineers.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, which will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
No, it isn't. There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
Mixing the two up is how a massive company like Microsoft ends up continually shipping atrocious software updates that destroy hardware or cause BSODs on its flagship operating system.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of 1-2 years ago. In which case, of course, what people are describing as present reality feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
> That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
Technically, 100% of the code they produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out, though.
Just throwing out random numbers like this is complete nonsense, since there are about a million factors that determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you count as producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I have a similar feeling about people who upload their AI art to sites like danbooru. I guess I can understand making it for yourself, but why do you think others want to see it?
xkcd turned stick figure drawings into an art form. sometimes it is not about how something was created, but about the story being told.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?