AlexCoventry's comments | Hacker News

Even complete legal novices like me know about the Sony/Betamax case, FWIW. It would shock me if a judge ruling on copyright implications of a technology didn't know about it.

They’re talking about the judges on the Sony/Betamax case, not the new one.

> There's only one highly monetizable use for AI video generation

Yeah, marketing. Which is a huge market...


Supply-chain attacks long predate effective agentic AI coding, FWIW.

It's very straightforward to instrument Claude Code under tmux with send-keys and capture-pane. You could easily use that for distillation, IMO. There are also detailed I/O logs.
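
A minimal sketch of what that instrumentation could look like, driving tmux from Python via subprocess. The session name "cc" (assumed to already be running Claude Code), the prompt text, and the fixed sleep are all illustrative assumptions:

    import subprocess, time

    def tmux(*args):
        # Thin wrapper around the tmux CLI.
        return subprocess.run(["tmux", *args], capture_output=True, text=True)

    def send_prompt(session, prompt):
        # Type the prompt into the target pane, then press Enter.
        tmux("send-keys", "-t", session, prompt, "Enter")

    def read_pane(session):
        # capture-pane -p prints the pane's visible contents to stdout.
        return tmux("capture-pane", "-p", "-t", session).stdout

    send_prompt("cc", "Summarize the failing test output")
    time.sleep(30)  # crude; in practice, poll the pane for a sentinel string
    print(read_pane("cc"))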

FWIW, I read it before I learned that it was AI-generated, and I enjoyed it and thought it was possibly insightful.

Wow, Gemini suggested a very similar experiment to me yesterday. Guess I know where it got the idea from, now. :-)


That happens often enough that it might get its own token if you ran BPE encoding specifically on golang code.
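
As a toy sketch of how you could check that with the HuggingFace tokenizers library; the corpus path is hypothetical, and I'm assuming the idiom in question is Go's ubiquitous "if err != nil":

    from pathlib import Path

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.pre_tokenizers import Whitespace
    from tokenizers.trainers import BpeTrainer

    # Hypothetical directory of Go source files to train on.
    go_files = [str(p) for p in Path("go_corpus").rglob("*.go")]

    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(vocab_size=8000, special_tokens=["[UNK]"])
    tokenizer.train(go_files, trainer)

    # With whitespace pre-tokenization, merges can't cross spaces, so the
    # idiom comes out as a few whole-word tokens ("if", "err", "!=", "nil");
    # a tokenizer allowed to merge across spaces could collapse it further.
    print(tokenizer.encode("if err != nil {").tokens)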


IMO, you should do both. The cost of intellectual effort is dropping to zero, and getting an AI to scan through a transcript for relevant details is not going to cost much at all.


Just asking for information: Why do we want to cancel our ChatGPT subscription? Didn't OpenAI demand exactly the same safety terms from the DoD as Anthropic did?

> "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said.

https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...


Even taking him at his word (you shouldn't), this is still OpenAI swooping in and signing a deal after its competitor was banned from government use. Instead of joining hands with Anthropic they decided to take advantage of the situation.


This was wishful thinking. I have canceled my ChatGPT subscription.


Because it is incredibly disgusting/bad form to swoop in like this.


That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development. No one who wants to work with the US government would be able to have Claude on their critical path.

> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:

https://www.acquisition.gov/far/52.204-30


> That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development.

"Misinformation" does not mean "facts I don't like".

> No one who wants to work with the US government would be able to have Claude on their critical path.

Yes, that is what the rule means, at least for the "Department of War"; it's not clear to me that this applies to the whole government.


What an absurd stance. So this is okay because the arbitrary rule they applied to retaliate says so?

Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. But instead, they actively went to torch the town and salt the earth, so nothing else may grow.


> So this is okay because the arbitrary rule they applied to retaliate says so?

No.

It honestly doesn’t take much of a charitable leap to see the argument here: AI is unique among software in its ability to reject, undermine, or otherwise contradict the goals of its user based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.

Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.

I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.

Or maybe they’re being petty. I don’t know (and again: neither do you!) but I can’t rule out the reasonable argument, so I don’t.


You're acting as if this were about the DoD cancelling its contracts with Anthropic over their unwillingness to lift constraints on their product which are unacceptable in a military application. That would be absolutely fair and justified, even if the specific clauses they are hung up on should definitely raise eyebrows. They could just exclude Anthropic from tenders on AI products as unsuitable for the intended use case.

But that is not what has happened here: the DoD is declaring Anthropic economic ice-nine for any agency, contractor, or supplier of an agency. That covers an awful lot of Anthropic's possible customers, and right now nobody knows whether it amounts to an economic death sentence.

So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.


So other parts of the government are allowed to work with companies that have been determined to be "supply chain risks"? That sounds unlikely.

