
""" I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together. """

- Caitlin Kalinowski, previously head of robotics at OpenAI

https://www.linkedin.com/posts/ckalinowski_i-resigned-from-o...




I 100% understand and agree with the AI community's argument around lethal autonomy.

But I am trying to understand this from the perspective of defence & govt. Why is it so business as usual for them? Do they consider this on par with missiles that use infra-red/heat sensors for tracking and locking? Where does the definition of lethal autonomy begin and end?

Just putting this out there as a point to ponder. On its own, this may well be too broad and should be debated.


I don't think it's about lethal autonomy specifically as much as it's just about government autonomy, period. They don’t think private companies should have any veto power over how the government uses some technology they're provided.

On its face that’s not a crazy stance: Governments are meant to represent the public, while private companies obviously aren't. I think it’s somewhat understandable why the government might reject that kind of "we know better than you" type of clause.

Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.


Additionally, that kind of public trust only works if you have a government operating under the constraints of a legal framework and, to a lesser extent, an ethical framework. When a government serves the whims of an individual instead of the function of its office, shirking agreed-upon laws, etc., then you no longer have a government serving the people.

Sure, that’s why I said "on its face." This administration is obviously very different than most.

I don’t think Anthropic is wrong to include that clause with this particular administration, and I doubt the administration is internally framing the issue the way I did rather than defaulting to simple authoritarian instincts.

But a more reasonable administration could raise the same concern, and I think I would agree with them.


I don't think it's reasonable for the government to take something it is supposed to be protecting (the right to contract) and become its biggest threat. That's not security; it's letting the night guard raid the museum.

Sure, I said as much:

> Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.


Agree, and I think labeling them (Anthropic) a supply chain risk was handled poorly and will likely be reverted over time. That being said, I would be nervous if I were in the Pentagon and depended on Anthropic tooling for something, even if that something was unrelated to kinetic operations. How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like? If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

Maybe the argument is that they should, but I don't agree with that. If Anthropic or any of these other vendors have reservations about the logical conclusion of how these tools will be/are used then they should not sell to the government. Simple as. However ... if the claims Anthropic et al make about how these systems will develop and the capabilities they will have are at all true, then the government will come knocking anyway.


> the government will come knocking anyway.

Dario has even said something along these lines at one point: As the technology matures, it’s very possible the government either nationalizes or semi-nationalizes companies like Anthropic.

That doesn’t seem out of the realm of possibility if they can’t land on a relationship like the one existing defense contractors such as Raytheon have, where these kinds of disputes evidently don't come up.


If the government wants a frontier LLM for military purposes then they can just put out a tender. Defense contractors like Anduril will bid on it. The end product might be slightly worse than what Anthropic sells but, as my dad used to say, "close enough for government work".

They don’t even need to do that. Elon is almost certainly pushing Grok to them as hard as possible right now, and it’s not like this administration is especially concerned with running a fair procurement process.

So it’s probably some mix of two things:

1) A punitive “bend the knee to us or we’ll destroy you,” which fits their track record.

2) Skepticism that Grok is actually as strong as the benchmarks suggest, which is also a pretty reasonable possibility.


> If you sell a weapon to the department that is in charge of killing people and breaking things, you don't get a say in who gets killed or how. It's never worked like that.

I can't agree that this is the right comparison. What is being sold here is not just another missile or tank type, it is the very agency and responsibility over life and death. It's potentially the firing of thousands of missiles.


> How do they audit that Anthropic can't alter model outputs for contexts they (the ethics board or whatever it's called, can't remember) don't like?

I was thinking that Anthropic would just be providing the models and setup support to run them in AWS GovCloud. They do not have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people embedded to debug inference issues, not something that would be analyzed by others in the company.

The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated. That being said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow rules.


I think what you are describing is technically possible (not my immediate domain, however). They don't have real-time insight into what the model is being used for; you are correct about this afaik. But the incident that kicked off this paranoia was Anthropic calling around after the fact to try to find out how JSOC was using the model during the Maduro raid. None of the context of those questions is public, and I doubt it will become public, but it stands to reason that the nature of the questions was concerning enough for the War Department to insist on the "any lawful use" language being inserted into the contract.

> The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated.

You are also correct here imo, with one important caveat. Even if private companies have the means for enforcing that clause, it is not their business to do so. Maybe that's the crux of the problem, one of perspective. The for-profit entity in these arrangements is not and can never be trusted as the mechanism of enforcement for whatever we, as a republic, decide are the rules. That is the realm of elected government. Anthropic employees are certainly making their voice heard on how they believe these tools should be used, but, again, this is an is versus ought problem for them.


A counter-argument here: if a private company knows that its technology may be used for human-not-in-loop targeting/surveillance, and knows that its technology is not yet ready to fulfill that use case without meaningful unintended casualties... does that company have an ethical obligation to contractually delineate its inability to offer that service?

In a version of a trolley problem where you're on a track that will kill innocent people, and you have the opportunity to set up a contract that effectively moves a switch to a track without anyone on it, is it not imperative to flip that switch?

(One might argue that faster reaction times might save service members' lives - but the whole point is that if the autonomous targeting is incorrect, it may just as well lead to increased violence and service member casualties in the aggregate.)

And we're not talking about the ethics board manipulating individual token outputs subtly, which would indeed be a supply chain risk - we're talking about a contractual relationship in which, if a supplier detects use outside of the scope of an agreed contract, it has the contractual right to not provide the service for that novel use, while maintaining support for prior use cases.

The fact that the government would use the threat of supply chain risk to enforce a better contract is unprecedented, and it deteriorates the government's standing as a reliable counterparty in general.


It's an interesting question, but it's mostly irrelevant.

This problem is really difficult to discuss because we are all wrapping the capabilities of these tools into our response framing. These are tools, or weapons. Your hypothetical could just as easily be applied to GBU-39s, small precision-guided bombs meant to take out, say, a single vehicle in a convoy versus the entire set of vehicles. If you're not confident in what the product is supposed to do, and you've already sold it to the government, you have lied, and they are going to come back to you asking some direct questions.


> They don’t think private companies should have any veto power over how the government uses some technology they're provided.

On the other hand, why should the government have infinite power to override how a business operates? If you're not able to refuse to sell to the government, isn't that basically forced speech and/or forced labor?


If you want to engage on this topic then you should start by reading up on the Defense Production Act of 1950.

https://www.congress.gov/crs-product/R43767


It’s not obvious that the government should have the power to override this; the US constitution was written as a collection of negative rights exactly to rein in government dictatorial impulses.

And now that we see the government blatantly disrespecting the constitution and the rule of law, civil society must react.


> It’s not obvious that the government should have the power to override this

The government shouldn’t be able to set the terms of its contracts with private companies and walk away if those terms aren’t acceptable? That seems like a stretch.

The constitution is a wildly different premise from government contracting with private companies.


There was no contract, the government wanted to have a contract where they'd be able to use the tool to violate privacy rights of its citizens and issue kill orders without a human present and the company said no.

The government shouldn't be able to coerce a business to do whatever it wants.


> There was no contract, the government wanted to have a contract where they'd be able to use the tool to violate privacy rights of its citizens and issue kill orders without a human present and the company said no.

So the contract process worked. The seller wanted certain clauses, the buyer rejected them, and the deal didn’t happen.

Setting aside the supply chain risk designation, which I already said was an extreme overreaction, this is basically how it’s supposed to work.

> The government shouldn't be able to coerce a business to do whatever it wants.

Governments coerce businesses all the time to do what the government wants. Taxes are the obvious example, but there are many others like OFAC sanctions lists or even just regular old business regulations.

It mostly works because we rely on governments to use that power wisely, and to use it in a way that represents the wishes of the populace. Clearly that assumption is being tested with the current administration, and especially in this particular situation, but the government coerces businesses to do what it wants all the time, and we often see it as a good thing.


For surveillance at least, multimodal AI is old hat: https://en.wikipedia.org/wiki/Sentient_(intelligence_analysi...

If you're one of the contractors working in NRO or aware of Sentient, OpenAI and Anthropic probably do look like supply chain risks. They want to subsume the work you're already doing with more extreme limitations (ones that might already be violated). So now you're pitching backup service providers, analyzing the cost of on-prem, and pricing out your own model training; it would be really convenient if OpenAI just agreed to terms. As a contractor, you can make them an offer so good that it would be career suicide to refuse it.

Autonomous weapons are a horse of a different color, but it's safe to assume the same discussions are happening inside Anduril et al.


The military has deployed lethal autonomous weapons since at least 1979. LLMs might be useful for certain missions but from a military perspective they're nothing fundamentally new.

https://www.vp4association.com/aircraft-information-2/32-2/m...


My most straightforward read is that the military simply doesn't want their contractors to have a say in the war doctrine. Raytheon doesn't get to say "you can only bomb the countries we like, and no hitting hospitals or schools". It doesn't necessarily mean the Pentagon wants to bomb hospitals, but they also don't want to lose autonomy.

A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".

I'm frankly less concerned about "proper" military uses than I am about the tech bleeding into the sphere of domestic law enforcement, as it inevitably will.


>A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".

What's the reason this is less charitable, exactly? Do we think this isn't true, or do we think it's immoral to build the Terminator even if China/Russia already have them?


I don't know what you're trying to argue about here. I meant "charitable" as in "not necessarily implying the thing critics worry about". The less charitable interpretation is that the implied thing is true but is seen as a necessity.

We'll leave the morality of war for another time.


> But I am trying to understand this from the perspective of defence & govt.

Hum...

The one thing domestic surveillance enables is defining targets inside the country, and the one thing lethal autonomy enables is executing targets that a soldier would refuse to.

Those things don't have other uses.


Do the Chinese do this in China? Walk away from companies that will be used for war? It doesn't seem to be prevalent; instead they try to take every advantage they can to push their country, China, to become the most dominant in the world. They must be elated to watch the world's premier tech companies protest the American government and refuse to work with it. If I wanted China to be weaker, I'd hope that Chinese companies protested and refused to work with the Chinese government.

It's explicitly illegal in China.

A 2017 national intelligence law compels Chinese companies and individuals to cooperate with state intelligence when asked, without any public notice.

China has no equivalent of the whistleblower protection that enables resignations with public letters explaining why, protests, open letters with many signatures, etc. Whenever you see "Chinese whistleblower" in the news, you're looking at someone who quietly fled the country first and then blew the whistle. Example: https://www.cnn.com/2026/02/27/us/china-nyc-whistleblower-uf...


Isn't that basically the same as a National Security Letter and its attached gag order in the USA?

It's along the same lines, but an NSL can be challenged in court (the FISC is a secret and lopsided court, alas). Companies like Apple and Google have fought specific orders publicly (and possibly some secretly), and some have won.

NSLs are also narrow in scope: they compel data disclosure, not active technical assistance in building surveillance systems like the Chinese law.

The Chinese laws can compel any citizen anywhere in the world to perform work on supporting state military and intelligence capabilities with no recourse. There have been no cases of companies or individuals fighting those orders.


Not at all. If you're an employee at a company that receives a National Security Letter then you can just quit if you want to. Unlike in China, the US government can't force you to keep working there to suit their purposes.

Yes, of course there are people in China who, when their job puts them in conflict with their ethics, will decide to do something ethical. I can't think of any war-related examples, since it's been a while since China was involved in any big wars, but I like the story of Liu Lipeng, who used to work as an internet censor: https://madeinchinajournal.com/2025/04/03/me-and-my-censor/

You have used chatgpt presumably. Based on your interactions with it, do you seriously think it should be allowed to shoot a gun without any human oversight?

That simplistic question is not how things will work. I guess we'll just get shot by Chinese AI; they will not stop.

You'd rather get shot by domestic bots first?

We have nukes, missiles, bombs, all capable of mass widespread death. Should we give those up too and just let adversaries be the only ones in possession of these types of weapons?

Autonomous robots are one of the adversaries. They're their own side.

One of the things about slave revolts in ancient times was that they really believed there are things more important than life.

Yes, we should dispense with ethics so we can win at all costs. Like, your point isn't invalid, but what's the point of restating something akin to the trolley problem as if the answer is obvious?

We can debate philosophy while our adversaries use any means at their disposal. Or we can invest in different ideas, see what works, and choose the best option.

What are we if we throw away the Constitution and allow the Government to punish people/companies that exercise their rights?

China's constitution includes freedom of speech and elections.

Funny thing: when you put rights on hold today for 'reasons', they tend to just go away. Look at the US today versus pre-9/11. It's a completely different country with completely different attitudes about freedom, privacy, government overreach, and power.


> This was about principle, not people.

Why do I not believe this at all? Were things truly sunshine and roses at OpenAI up until this Pentagon debacle? Perhaps I am mistaken, but it seemed like the writing was on the wall years ago.

> I have deep respect for Sam and the team

I have even more questions now.


Sounds like a statement to ensure they aren’t blacklisted or seen as anti-executive.

> they aren’t blacklisted or seen as anti-executive.

Which further solidifies my belief that this person is being disingenuous.


How can you respect someone who betrays a principle you care about for money?

Not to mention that the principles are not being betrayed now for the first time.


You could believe it is not about money.

Most importantly, this seems to rest more on whether you believe the principle was being followed or not.

It is possible to believe one thing, have another person believe another, and respect that their decision is sincerely held but shaped by a different perspective, as our own beliefs are.

You can stand up for what you believe and still respectfully disagree with someone with a different stance.

The problem is when you decide that reality always conforms to your opinion. If you assume the other person is aware of that reality and decides differently, then it becomes a betrayal of principles. Presuming to know the internal state of another's mind, and declaring that it is for money, is disrespectfully presumptuous.

Your problem is not in understanding how X can occur if Y. It is assuming that everyone agrees with you on Y.

You might be right about Y, you might be wrong. Even if you are right, it is still possible that Y is a belief a rational person can hold if their perspective has been different.


> respect someone who betrays a principle you care about for money?

That is one of my many questions too. I am not certain I believe her either. People predicted AI would be used in such nefarious manners way before AI even existed.

Something about the whole resignation and immediate social media post seems more like an attention grab than anything else to me. Whatever her reasons, I still believe she is partially culpable for anything that becomes of this technology -- good or bad.


[flagged]


This person simply does not want to be involved in making autonomous killing machines. What does that have to do with being trans?

I’m not sure it makes much sense to call this naive. When the issue is a moral one, conflicts like this are almost inevitable. Ethical convictions often collide with political or institutional decisions. If a company decides to work with the military and someone finds that fundamentally incompatible with their values, stepping away may simply be the only coherent option for them. It’s sometimes just moral consistency. Moral inconsistency can create deep inner conflict and real psychological distress. This being said, many queer people (not only trans people) seem to develop particularly inclusive values, perhaps because they have personally experienced the consequences of harassment, exclusion, or violence. Having gone through that kind of experience can make someone more sensitive to the vulnerability of others.

putting aside the obvious gender-based prejudice in this comment:

since when did the view that "humans should be in the loop before murderbots target and kill someone" become a "naive moral absolutist view of the world"? we're resigned to building the terminator now?


Yes, it's clearly inevitable. The advantages are too great, and the disadvantages too small.

We can't even avoid using weapons where the equation is much more "this is really awful" like cluster bombs and flamethrowers.

You might be a bit behind the times too. There are already plenty of weapons platforms that kill without a human in the loop. I believe the first widely known one was South Korea's sentry gun but that was 10 years ago.


It's you.

Are you keeping your own absolutist assumptions in check?

what the hell are you talking about?



