I think this only applies to a rather narrow set of ideas.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody else makes it exist then I'm happy, because then I get to use it.
So what kind of things does this apply to? Likely: zero-sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has so far been overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again.
In the flat trust model we currently use in most places, it's on each person to block each spammer, bot, etc. The cost of creating a new bot account is low, so it's cheap for them to come back.
On a web of trust, if you have a negative interaction with a bot, you revoke trust in one of the humans in the chain of trust that caused you to come in contact with that bot. You've now effectively blocked all bots they've ever made or ever will make... At least until they recycle their identity and come to another key signing party.
Once you have the web in place, though (a series of "this key belongs to a human" attestations), you can layer metadata on top of it, like "this human is a skilled biologist" or "this human is a security expert". If you use those attestations to determine what content you're exposed to, then a malicious human doesn't merely need to show up at a key signing party to bootstrap a new identity; they also have to rebuild their reputation to the point where you, or somebody you trust, becomes interested in their content again.
Nothing can be done to prevent bad people from burning their identities for profit, but we can collectively make it not economical to do so by practicing some trust hygiene.
Key signing establishes a graph upon which more effective trust management becomes possible. On its own, it's likely insufficient.
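The revocation mechanic described above can be sketched as a toy model. This is a hypothetical data layout for illustration, not any real PGP or keyserver API: each identity records the human key that attested it, and revoking one human invalidates every identity they ever vouched for.

```python
# Toy web-of-trust revocation sketch (hypothetical names and data model).
# Each identity maps to the human key that attested "this is a human".
attestations = {
    "bot_a": "mallory",
    "bot_b": "mallory",
    "alice_blog": "alice",
}

revoked = set()


def trust(identity):
    """An identity is trusted only if its attester exists and isn't revoked."""
    attester = attestations.get(identity)
    return attester is not None and attester not in revoked


# One bad interaction with bot_a: revoke the human who attested it...
revoked.add(attestations["bot_a"])

# ...and every identity that human ever attested is now distrusted at once,
# while identities attested by other humans are unaffected.
assert not trust("bot_a")
assert not trust("bot_b")
assert trust("alice_blog")
```

The point of the sketch is that one revocation cuts off the whole subtree of identities hanging off the bad attester, which is what makes burning an identity expensive relative to the flat block-each-bot model.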
Battle-hardened tools for this have existed for decades; we don't need new ones. Just run claude as a user without access to those directories, so that the containment is inherited by subprocesses.
You can do that, but you need root to set it up each time, and it's not super convenient: you need to decide in advance which user account you're going to work under, and you may end up with files you can't read from your regular account. Think of jai strict mode as a slightly easier-to-use and more secure version of what you described. Using ID-mapped mounts lets both you and the unprivileged user account access the same directory with the same credentials, and you don't need to decide in advance which directories you want to expose. Also, things like disabling setuid and using PID namespaces provide an additional measure of isolation beyond what you get from another account.
You're not wrong, but this will require managing file perms (groups and the like), and new files will by default be owned by the claude user instead of your regular user. I tried this early on and quickly decided it wasn't worth it (to me). Your mileage may vary, of course.
True. I just maintain separate /home/claude/src/proj and /home/me/src/proj dirs so the human and robot workspaces stay separate. We then use git to collaborate.
Agreed, as long as it's a catastrophe that the bettors can't cause, but for which advance warning can mitigate harms.
For instance, I'm in favor of bets that a certain asteroid will strike the earth at a certain time and place. A signal from the prediction markets might cause somebody to evacuate in a scenario where they'd otherwise cry "fake news."
Let's not bet on whether the water will remain drinkable, because the last thing we need is for somebody to have an incentive to poison it.
> For instance, I'm in favor of bets that a certain asteroid will strike the earth at a certain time and place. A signal from the prediction markets might cause somebody to evacuate in a scenario where they'd otherwise cry "fake news."
I understand the point you're making, but in this case you're still incentivizing someone, somewhere, not to try their best to intervene against that asteroid. Bets that truly can't cause any change in behavior that might affect the outcome are a mostly theoretical category, in my opinion.
If the bet can't cause any change in behavior then the whole thing is useless. The whole point is to do some good with it. The constraint is that the bettor can't alter the outcome.
Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.
Granted that's also theoretical, but it's worth theorising about how we'll get things done in a world where the only way to be heard is to put your money where your mouth is.
> Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.
This example seems weaker than the asteroid example. If you're betting "loudly enough" that it won't get merged, then the malware actually being merged becomes the less likely outcome. Now a repo maintainer can bet that it will get merged (probably a more lucrative bet, since it's less likely) and then merge it to profit from that bet.
1. work on a project on host_foo in /home/user/src/myproject
2. clone it on host_bar in /home/user/src/myproject
If you set filter_mode = "directory", you can recall project-specific commands from host_foo for use on host_bar, even though you're working on different machines, and the search space won't be cluttered with project-specific commands from other projects.
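For context, `filter_mode = "directory"` looks like a setting from a TOML-configured shell-history tool (atuin has a setting of this name); if so, the fragment would live in its config file, something like the following (path and surrounding details assumed, check your own tool's docs):

```toml
# e.g. ~/.config/atuin/config.toml (assumed location)
# Scope history search to the current working directory, so the same
# /home/user/src/myproject path on host_foo and host_bar shares one
# pool of project-specific commands.
filter_mode = "directory"
```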
Sure, but somebody somewhere has had a relevant critical thought, and the LLM can find it and adapt it to your case. That's good enough much of the time.
> vote with your dollars by not throwing them at big tech companies
Abstaining is not voting. If you want to vote with your dollar, spend it actively undermining big tech companies. Get out there and blind some cameras or something.
Fair, if you’re already not giving them money. But if you manage a sizable chunk of cloud spend at AWS, GCP, Azure, etc., you can send a meaningful signal by taking that revenue away and shifting it to a company that’s not aiming for neo-feudalism.
Maybe there needs to be some nonprofit watchdog which helps identify those cases in their early stages and helps bootstrap open forks. I'd pay into a sort of open-capture-protection savings account if I believed it would help ensure continuity of support for the things I rely on.