Assemblies will come, but not anytime soon. I did some research and tests, and it is going to be a challenge. You can still add multiple parts in one scene, but there is no actual joint implementation yet.
> For instance it does not make sense to have an MCP to use git.
What if you don’t want the AI to have any write access for a tool? I think the ability to choose what parts of the tool you expose is the biggest benefit of MCP.
As opposed to a READ_ONLY_TOOL_SKILL.md that states “it’s important that you must not use any edit APIs…”
It’s just as easy to write a wrapper for the tool you want to restrict: you ban the restricted tool outright, and the skill instructs on usage of the wrapper.
That’s safer than just instructing the model to use the tool a specific way.
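The wrapper idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling: the allowlist of git subcommands is my own assumption, chosen to include only operations that can't modify the repository.

```python
import subprocess

# Hypothetical allowlist: only subcommands that cannot write to the repo.
# Adjust to taste; this set is an assumption for illustration.
READ_ONLY = {"status", "log", "diff", "show", "blame", "rev-parse"}

def is_read_only(args):
    """True if the argument list starts with an allowlisted subcommand."""
    return bool(args) and args[0] in READ_ONLY

def run_git(args):
    """Run git, refusing anything not on the read-only allowlist."""
    if not is_read_only(args):
        raise PermissionError(f"refusing: git {' '.join(args)}")
    return subprocess.run(["git", *args], capture_output=True, text=True)
```

The agent is only ever pointed at the wrapper; `git commit` or `git push` fail with a hard error rather than relying on the model to honor an instruction.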
Anyone who's ever `DROP TABLE`d on a production rather than test database has encountered the same problem in meatspace.
In this context, the MCP interface acts as a privilege-limiting proxy between the actor (LLM/agent) and the tool, and it's little different from the standard best practice of always using accounts (and API keys) with the minimum set of necessary privileges.
It might be easier in practice to set up an MCP server to do this privilege-limiting than to refactor an API or CLI tool, but that's more an indictment of the latter than an endorsement of the former.
Is there something else that replaces Keybase proofs? I still see a ton of people, including yourself, with Keybase in their profile, but does anyone still see it used?
Nobody uses it because it wasn't supported. I had started off doing images, but their CSP policies prevent an extension from embedding images into the DOM. SVG is still fair game, though. That said, I've seen a bunch of people try to create their own ASCII-art diagrams, and wouldn't it be nice to have them just rendered as SVG?
Well yeah, business has literally always extracted value from open source software; that’s one of the main benefits of it… (although the scale of license violations with AI is unprecedented)
“Creating value” in open source has never been about capturing value at all; it’s always been about volunteering, giving back, and recognising the unfathomable amount of open-source software that runs the modern world we live in.
“Capturing value” is the opposite of this: walled gardens, proprietary APIs, vendor lock-in, closed-source code… it’s almost antithetical to the idea of open source.
> “Creating value” in open source has never been about capturing value at all, it’s always been about volunteering and giving back
I disagree; the GPL has always been transactional. You capture the value in your product by ensuring improvements come back to you. The user "pays" by not being able to close the product off.
> If clean-room re-engineering a MIT code base starting from a GPL one is legit, then AI has just made that the status quo for everything.
I agree; this is what I meant by "the value is being captured by someone else".
The GPL provides the author with a specific value: you get back improvements. Using AI to launder that IP so that improvements don't have to be upstreamed is effectively capturing the value.
> so projects are built whether or not they're good ideas
Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.
So as execution becomes easier, it’s the ideas that matter more…
This is something that I was thinking about today. We're at the point where anyone can vibe code a product that "appears" to work. There's going to be a glut of garbage.
It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.
Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.
In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.
How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one that wasn't might actually recover your backups when it fails. The other 9? Whoops, never tested that codepath. Customers won't know until the edge cases happen.
It's the App Store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.
To play devil's advocate, if you were serious about building a product, whether it was hand-coded or vibe-coded, you would iterate through the work and implement functionalities step-by-step.
But with vibe-coding, you might not give enough thought to the product to think through its use cases. I think you can still build good software with varying degrees of AI assistance, but it takes the same effort of testing and user feedback to make it great.
There’s also the much more common case of a competitor coming in with a similar product that has a few more features matching the customers’ requirements… which explains the endless product development treadmill that companies find themselves on.
Software doesn’t win by being “finished”; it wins by outcompeting other software.
Yeah, if YouTube was "finished" we wouldn't have had YouTube Red, YouTube Shorts, YouTube Music, etc.
And yes, I am making a good case for mature software with those lovely examples. But clearly they wanted more widgets, and they kept engineers who could deliver those widgets. This wasn't some unsustainable thing for YouTube, as the top comment argues. And that's how most software businesses work as of now: if you remain complacent, you're slowly dying to competition, because the demand for more still exists.