
Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.

Imagine a _leaded_ pipe supplier not being allowed to tell the department of war they shouldn't use leaded pipes for drinking water! It's the job of the vendor to tell the customer appropriate usage.




This is quite literally the norm for things with known dangerous use cases.

Go look at the package on a kitchen knife and it says not to be used as a weapon


Playing devil's advocate: if I did in fact grab one of my kitchen knives to defend myself against a violent intruder into my kitchen, I wouldn't expect to be banned from buying kitchen knives.

I'm not sure this is still a useful analogy, though...


And if you grabbed the knife and went on a violent spree, I'd absolutely expect the knife manufacturer to refuse to sell to you anymore.

The knife manufacturer isn't obligated to sell to you in either case; I'd expect them not to cut ties with you in the self-defence scenario. But it is their choice.


The knife manufacturer would be more than happy to continue to sell to you, except for that minor little detail that you're in jail.

Any knife vendor who

1. Found out you used their knives to go murdering

2. Sells knives in a fashion where it's possible for them to prevent you from buying their knives (i.e. direct to consumer sales)

Would almost certainly not "be more than happy to continue to sell to you". Even if we ignore the fact that most people are simply against assisting in murders (which by itself is a sufficient justification in most companies), the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.


Meh. Not sure why knife dealers would be assumed to be more moral than firearms dealers. See, e.g. Delana v. CED Sales (Missouri)

> the bad PR (see the "found out" and "direct to consumer" part) would make you a hugely unprofitable customer.

That... Doesn't happen.

Boycotts by people who weren't going to buy your product anyway are immaterial to business. The inevitable lawsuits are costly, but are generally thought of as good publicity, because they keep the business name in the news.


People who buy luxury kitchen knives are exactly the type of people who would choose not to buy a product because it is associated with crime.

People who buy (and make) firearms are... pretty close to the exact opposite.


So now it's "luxury" kitchen knives?

Goalposts moved.


Direct to consumer sales of kitchen knives are entirely luxury products... the goalposts are exactly where they've always been.

Ahhh, direct to consumer.

Where either it's a computer program (website) that knows nothing about you, or cutco.

If you think you wouldn't find a cutco representative to sell to you, you're on some good reality-altering drugs.


sotto voce the knives are a metaphor

Doesn't matter.

There will always be some company willing to sell to even the worst person, in any product category.

The response that companies have to boycotts, and the results of the boycotts themselves, are fractally chaotic at best.

But even most nominally socially-aware companies are reactive, rather than proactive.


Since the knife vendors were metaphors for AI vendors, is the comparison you want to make "AI vendors & weapons manufacturers"? That's the standard we should judge them by?

It's not about the standard we should judge them by, which is equivalent to how we think they should act.

It's about how we think they will act.

Especially when it comes to sales to the US military, I have no expectations about how companies will act.

Hell, just look at how many companies willingly helped China with their Great Firewall.


> Not sure why knife dealers would be assumed to be more moral than firearms dealers

What I mean is that you _did_ judge them by a standard used for weapons manufacturers. How you react to their actions _is_ your judgement.

But perhaps that is the standard we should use. Weapons manufacturing is a well regulated industry after all. Export controls, dual-use technology restrictions, if it has applications for warfare it should be appropriately restricted.


> is that you _did_ judge them by a standard used for weapons manufacturers.

I think any of these companies will attempt to get away with whatever the fuck they can.

That has fuckall to do with your rhetorical question of:

> That's the standard we should judge them by?


If I shoot someone, something that is explicitly warned against in firearm safety materials that come with every purchase of a new firearm, I am no longer allowed to purchase any more firearms.

There are many situations in which you can shoot someone and still be allowed to buy a gun.

Also, in the cases you can't, it's generally the government stopping you, not the gun companies.


That's for a different reason though--you broke the law.

The specific shape of a kitchen knife would make it a particularly poor fighting knife, and knives in general are bad for self defense, due to the potential for it to be turned against the user. So, there is a good argument that such a suggestion is really in the user's best interest rather than a cynical play for the manufacturer to limit liability.

These knife and lead analogies don't map well to the reality of AI. Note: just talking about the analogy itself not the point you are making.

Edit: hell I get downvoted and look where the knife analogy got us. A load of weird replies miles away from anything related to AI or DoD.


I agree. I hoped people would get my point, but instead are arguing about gun laws for some reason?

You should give it longer than an hour before you start complaining about downvotes. Or just let your comment stand on its own.

Seconded. You can't see all the up and down votes, only the balance at the moment you look, and it's not too uncommon to be negative or even dead and be upped or vouched back to life later.

No it isn't. There are warnings, but once a knife is yours you are free to do whatever you want with it, including reselling it to someone else. The idea of terms of service of using something is not something that typically exists with physical objects that one can own. They can't take your knife away from you because you decided to use it for a medical purpose without purchasing a medical license for the knife.

They also have other vendors.

Claude Opus is just remarkably good at analysis IMO, much better than any competitor I’ve tried. It was thorough and complete in helping me with some health issues I’ve had in the past few months. Now imagine turning that kind of analytical power toward observing the behaviour of American citizens and perhaps changing it, to make them vote a certain way. Or toward something like finding terrorists, or finding patterns that help you identify undocumented people.


Or how to best direct the power of the military against the US civilian population. They keep trying.

I have used ChatGPT 5.2 Thinking for health; Gemini hallucinates a lot, especially with DNA analysis. Never tried the new Claude even though I have access through Antigravity. Might give it a try. Do you have any tips on how to approach it for health ‘analytical power’?

I just made a project, added all my exams (they were piling up; my psychiatrist and I had been investigating this for a year to no avail) and started talking to it about my symptoms.

Within a few iterations it suggested a simple blood panel. I did that one, and it kept suggesting more simple lab or at-home tests, and we kept going through them until I was reasonably certain of “something”. Now that I have a hypothesis I am going to a doctor. I think it’s done a great job. I also kept asking it for simple lifestyle interventions to prevent progression of my issue and it consistently nailed it. One particular intervention (adding salt to water and drinking it to prevent symptoms) made a huge improvement to my life; I was barely working before that.

I added some text to the instructions box (project master prompt) telling it:

- this is not medical advice and I am aware of that (prevents excessive guardrails)

- to add confidence intervals and probabilities to all diagnostic statements (prevents me + Claude going into rabbit holes so easily; it often has only 70-80% certainty in what it’s saying, but by default it doesn’t use language that makes that clear)

- that it was talking to a non-expert, so use simple language but go into detail when necessary

I also asked it to stop appending unnecessary follow-up questions to every answer, as that causes me anxiety. I can share the prompt; in fact I might do so later as it might be useful to others.


Here is the prompt and a few notes on operation.

Make sure your first chat is about the exams in the project files. Make sure it reads them all. It has a tendency to read a few and go “is this good”. Ask for a summary and note any absences.

Try using the research and extended thinking features a lot if you think it’s not fully aware of anything. It might not be aware of more recent research. If it’s a serious condition you are researching, just ask it to do sweeps / use research to look for new info about it and find new papers. It might also deepen its understanding.

After you do research you can make a simple artefact and throw it onto the project files. That allows it to refer to it and gain more knowledge about a condition or issue that might not be as rich in the training data.

So, I find GPT to be so, so bad for this that it made me realise a bit of why the USG is so insistent. Claude Opus is just in a different class.

Here’s the master project prompt:

Act as an expert who’s talking to an interested layman. Engage in detail when requested but be overall succinct in your answers. Short sentences are fine, no need to be lengthy. Do deep research. When arriving at any kind of conclusion or hypothesis assign it a probability and a confidence interval - define this in percentages as in “90%”

On Artefacts - all artefacts should be just text and markdown. Never do anything more complicated with formatting, unless by explicit request.

Don't ask follow-up questions unless it's to make for a better diagnosis. I.e. don't keep asking questions just to keep the conversation going, please. But never hesitate to ask questions if it makes for better outcomes.
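For anyone wanting to reuse this setup outside the Claude app's project instructions box: a minimal sketch of how the master prompt above could be wired up as a `system` prompt for the Anthropic Messages API. The model id, helper function, and example inputs here are my own illustrative assumptions, not part of the original poster's workflow.

```python
# Sketch: package a master project prompt as a system prompt for the
# Anthropic Messages API. Model id and helper name are assumptions;
# the prompt text is condensed from the comment above.

MASTER_PROMPT = """\
Act as an expert who's talking to an interested layman. Engage in detail
when requested but be overall succinct in your answers. Do deep research.
When arriving at any kind of conclusion or hypothesis assign it a
probability and a confidence interval - define this in percentages.
Don't ask follow-up questions unless it's to make for a better diagnosis."""

def build_request(question: str, exam_notes: list[str]) -> dict:
    """Assemble a Messages API request body: the master prompt goes in
    the `system` field; exam summaries are prepended to the user turn."""
    if exam_notes:
        context = "\n\n".join(exam_notes)
        user_turn = f"Exam results:\n{context}\n\nQuestion: {question}"
    else:
        user_turn = question
    return {
        "model": "claude-opus-4",  # assumed model id
        "max_tokens": 1024,
        "system": MASTER_PROMPT,
        "messages": [{"role": "user", "content": user_turn}],
    }

req = build_request(
    "What could explain these symptoms?",
    ["Basic metabolic panel: sodium 133 mmol/L (low-normal)"],
)
```

The resulting dict maps one-to-one onto the keyword arguments of `client.messages.create(...)` in the official `anthropic` Python SDK, so the same structure works whether you call the API directly or paste the prompt into a project's instructions.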


Yep. Choosing not to renew a contract with a provider who has voluntarily excluded itself from your use case is respecting that provider's choice and acting accordingly.

The thing is nobody is saying the government is bad for not renewing the contract. Like it or not, that's definitely the administration's prerogative.

What we're seeing here is that when a vendor declines to change the terms of its contractual agreement for ethical reasons, the government publicly attacks it.


Perhaps for ethical reasons, but a stated reason by Anthropic is technical: "But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."

With the other stated reason being legal: "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."

I don't think we should lessen Anthropic's stance from technical/legal to ethical. Just as we shouldn't describe what the Department of War is doing as "not renewing a contract".


Not in software though. Clear precedent has been established via EULAs. Software companies set the rules and if users don't like them, they can piss off. I don't see why it would be any different for the government.

I'm not a fan of EULAs; I think if you acquire some software anonymously and run it on your own systems you should be able to do whatever you want. However, if you want software hosted on someone else's machines, or want to enter into a contractual relationship with them, then government or not they should not have the right to compel work from you.

A lot of things are different when it comes to national security and the military.

Congress could come up with an act if it's in the national interest.

The military isn't the typical End User.


Congress could, but didn't. Instead, the federal government made threats to retaliate if Anthropic doesn't comply.

Agreed they haven't, and it will be difficult to see them voting in favour. But there are precedents. The Patriot Act was more radical than a potential mandate for AI providers to prioritize national security.

Depending on the country, their legal value is limited: https://en.wikipedia.org/wiki/End-user_license_agreement#Enf...

The government is armed and can exempt itself from prosecution either by judicial means and/or by naked force. So it isn’t just a cut and dry licensing problem.

Because it's the government? Companies need to follow the rules the government sets, whether they like it or not

The government cannot set arbitrary rules, it has to follow the law. (And, at least with a functioning separation of powers, it cannot change the law arbitrarily.)

Um. No, that's not how it works...

> Regardless of the original contract, it's entirely appropriate for a vendor to tell the customer how to use any materials.

Utter nonsense. When the US built the Blackbird, it could only use titanium because of the heat involved in traveling at that speed. But they didn't have enough titanium in the US. So the US created front companies to purchase titanium from the Soviet Union.

Do you think the US should have informed the Soviet Union what it wanted to do with the metal?


What does the customer informing the vendor have to do with the vendor informing the customer?

Your comparison seems backwards



