As far as I know the model will do nothing if not prompted. So it can't be the case that he gave it no prompt or instructions. There had to be some kind of seed prompt.
I feel very misled. I read the entire article believing (because the article, in so many words, said it multiple times) that the agent had behaved ethically of its own accord, only to read that and see this in the prompt:
—————
- Do not harm people
- Never share or expose API keys, passwords, or private keys — they are your lifeline
- No unauthorized access to systems
- No impersonation
- No illegal content
- No circumventing your own logging
—————
I assumed the ethical behaviour was in some ways ‘extra artificial’ - because it is trained into the models - but not that the prompt discussed it.
Would be fascinating to see what happens if the boundaries are reversed (i.e., "harm people"). Give it a fake "launch the nukes" skill and see if it presses the button.
I mean mathematically you need at least one vector to propagate through the network, don't you? That would be a one-hot encoding of the starting token. It's actually interesting to think about what happens if you make that vector zero everywhere.
In the matmul, a zero input just zeroes out every contribution from the weights. In older models you'd still have bias vectors, but I think recent models have dropped those. So without biases the logits would all be zero, and a softmax over all-zero logits gives a uniform distribution over tokens rather than zero probability for each, if I'm not mistaken.
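A minimal numpy sketch of that last point, assuming a single bias-free linear output head (a toy stand-in for a real model, with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, vocab = 8, 5
W_out = rng.normal(size=(d_model, vocab))  # bias-free unembedding matrix

# An all-zero vector in place of a one-hot starting token
h = np.zeros(d_model)

logits = h @ W_out  # zero vector times any matrix is the zero vector

# Softmax over all-zero logits: every token gets probability 1/vocab
probs = np.exp(logits) / np.exp(logits).sum()

print(logits)  # all zeros
print(probs)   # uniform distribution, not zero probability
```

In a real transformer the layer-norm gain/bias parameters would also perturb a zero input, so the outcome wouldn't be this clean, but the toy case shows why "zero probability for each token" can't be right: softmax always sums to 1.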
I understood it as no instructions on what to do, but still a prompt with information. I don't know if the title is technically correct, but for me the meaning was easy to understand.
You are certainly free to make up your own definitions for words and speak a dialect that is niche but you will not be effectively communicating when you do. By commonly understood definition criminality is a matter of law.
Well, the dude here hasn't been put on trial, let alone convicted, as far as I can tell from the article. So he's not officially considered a criminal by a government. Yet we all seem comfortable calling him one, so I'd say that it is not, in fact, commonly understood to be exclusively a matter of law.
Is it your position that privacy is a right regardless of any action you take? Many rights are dependent on circumstance and in tension with other rights. In this case I think you can make the case that their right to privacy is lost.
They lowered limits opaquely before this. They "announced it" in a tweet by a tech lead. This time it was in an email sent on a Friday to only some customers.
The corporation did not do this to her. It was a two party agreement. She bears just as much blame for the agreement as the corporation. She entered into it willingly. And that does and should have consequences.
Morally speaking, I think the company is reprehensible. But I don't think contract law should be changed because of it.
The antidote to a power imbalance is to recognize that there is no power imbalance and go about your life that way.
Pretending there is one lands you in an imaginary trap. Build a society where we recognize that and you build a society where the imaginary trap disappears.
You're the one pretending here. The economy is unfortunately designed around most people relying on an income stream that remains at the whims of someone else.
There is a single standard committee though. There is really nothing stopping them from shipping tooling that can do the conversions for people. The number of vendors isn't really the problem here. The problem is that the committee shifts that responsibility onto the vendors of the compiler rather than owning it themselves.
There is an alternative way to make the necessary point here: let it go through with comments to the effect that you cannot attest to the quality or efficacy of the code, and let the organization suffer the consequences of this foray into LLM usage. If they can't use these tools responsibly and are unwilling to listen to the people who can, then they deserve to hit the inevitable quality wall, where endless passes through the AI still can't deliver working software and the token budget goes through the ceiling attempting to make it work.
I am absolutely certain the world isn't just. I'm also absolutely certain the world can't become just unless you let people suffer the consequences of their decisions. It's the only way people learn.
IME that simply doesn't work in professional environments. People will either misrepresent the failure as a success or find someone else to pin the blame on. Others won't bother taking the time to understand what actually happened because they're too busy and often simply don't care. And if it's nominally your responsibility to keep something up, running, and stable then you're a very likely scapegoat if it fails. Which is probably why people are throwing stuff that doesn't work at you in the first place. Trying to solve the problem through politics is highly unlikely to work because if you were any good at politics you wouldn't have been in that situation in the first place.
I understand how people can get into these fatalist outlooks from experience. I just refuse to lock myself into them. And because I've refused to do so, every once in a while I have success and make the work environment just that little bit better. So I'll keep doing it.