Altman tweet:
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
From that, it reads like the administration quickly agreed with OpenAI to the terms Anthropic had wanted.
Does "putting them in the agreement" mean "we will never allow them," or "we will not allow them if they are illegal"? Here's a link saying the DoD was willing to make up with Anthropic any time, if Anthropic allowed surveillance of Americans: https://www.axios.com/2026/02/27/anthropic-pentagon-supply-c...
Seems like Altman wants to spin this as the same principled stand Anthropic took, but they really caved to the DoD's "all legal applications" framing. It's up to you to decide how much you think the law restrains the Pentagon here.
I’m now doing enterprise coding tasks in 3 days that used to take a month of whole-team coordination, from mockups through development and testing. It’s all test-driven development: Codex 5.3 and a small team of two people who know how to hold it right, orchestrating the agents. There’s no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.
I work for an old enterprise, so far rather conservative with LLM/AI usage. However, Copilot CLI adoption in the last 2 weeks is spreading like wildfire. Codex 5.3 plus a good instructions file, and it works. Features are getting done and delivered in days, with proper test coverage and proper documentation in place. Onboarding to it is also very fast.
Porting tons of untyped legacy JS front-end code to Vue with TypeScript, working from Figma designs. It's a highly configurable business-to-business app (i.e., lots of permutations). Everyone seems to have a “system”. I recommend looking at the OpenAI Cookbook for long-running plans, and doing TDD to the extreme. https://developers.openai.com/cookbook/articles/codex_exec_p...
The 40k-lines-of-code-a-day crowd is amusing. In solving any problem solvable by code, there's a ratio of non-coding work to coding work, and Codex et al. help immensely with the coding work but far less with the non-coding work.
Non-coding work is thinking about the system architecture, thinking about how data should flow, thinking about the problem to be solved, talking with people who will use it, discovering what their objectives are.
Producing 40k lines of code per day simply means you're not doing any of that work: the work that ensures you're building something worth building.
Which is why the result is massive, pointless things that don't do the things people actually need, because you've not taken any time to actually identify the problems worth solving or how to solve them.
It's a form of mania that recalls Kafka's The Burrow, in which an underground creature builds an endless series of catacombs without much purpose or coherence. When building becomes so easy after being so hard, and when it's more fun to build and watch Codex's streams of diffs fly by than to plan, we forget the purpose of building; building becomes its own purpose. Which is why we usually see so little actual productive impact on the world from the "40k lines of code a day" cohort.
Just because tests pass does not mean that they're testing the right thing to begin with. Reviewing tests is as important, if not even more important than reviewing code.
I agree with your point that the original claim is unlikely to be true (and would be extremely foolish behavior even if it were true). I don't think it's good to flame people though, even if they did say something unreasonable.
> "I am able to push 30-40K lines of nearly perfect code a day now."
It is physically and physiologically impossible for anyone to review "30-40K lines of nearly perfect code a day" to the extent needed to push it with confidence in a sensible development process.
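The back-of-envelope arithmetic (assuming an 8-hour day with no breaks) makes the point:

```python
# Sustained review rate implied by the claim: 40,000 lines in 8 hours.
lines_per_day = 40_000
seconds_per_day = 8 * 60 * 60  # 28,800 s
rate = lines_per_day / seconds_per_day
print(f"{rate:.1f} lines per second, every second, all day")  # ~1.4
```

Roughly 1.4 lines per second, sustained for eight straight hours, is not review; it's scrolling.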
Why do you and many of your industry friends conveniently never actually post this 'perfect code' when asked for proof? I've asked about five different people making these claims, and they just vanish into the ether.
I wonder if a large chunk of the population choosing to buy only non-discretionary goods for an extended period might freak policymakers out more. Not a targeted boycott. Not a strike; people would still go to work. Lower effort to participate. For example, if this caused US Amazon orders to fall by a quarter for two weeks, and similarly across all retailers.
Low effort to participate isn’t a feature. The point of these kinds of actions is to show that there are a lot of people who are really fired up and won’t be placated or deterred unless policymakers meet their demands.
Sort of? You want something that's going to actually affect the corporations involved. It's not about showing effort, because the government doesn't care how much effort you put in. It's about showing power, making a statement that we "the people" have power and can use it if you don't do what we want. A long-term "nonessentials boycott" might be more impactful in that sense.
Thanks this made my day! Well done. Currently exploring “I’m a bowling ball and need all surfaces and obstacles to be smoothed and graded so I can progress through the game. You must accommodate this for me or I can’t play.” The GM is creating gusts of wind for me to get around.
What if someone distributed contraband rechargeable tablet devices running an offline open-source LLM into a knowledge desert, where the government limits education, censors information, and blocks the internet to maintain control?
I agree. I have a Nord Drum 3P, which does FM percussion modeling with drum pads. I can get close to these sounds, plus a lot more stuff that bends when you hit the pad harder, at about half the price on Reverb. The Phase 8 is a cool idea.
Recently I was using docling to transform some support-site HTML into Markdown, replacing UI images with inline descriptive text. An LLM created all the descriptions. My hope was that descriptions like “a two pane..below the hamburger…input field with the value $1.42…” would let an LLM understand the UI when given as context in a prompt. Maybe I could just put ASCII renderings inline instead.
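Not the original pipeline, but a minimal sketch of the image-replacement step, assuming a hypothetical `describe_image` helper standing in for the LLM call:

```python
import re

def describe_image(path: str) -> str:
    # Stub: in the real pipeline an LLM would return a description
    # like "a two-pane layout; below the hamburger menu, an input
    # field with the value $1.42".
    return f"[UI description of {path}]"

def inline_descriptions(markdown: str) -> str:
    # Replace each Markdown image with inline descriptive text so a
    # downstream LLM can "see" the UI from prose alone.
    return re.sub(
        r"!\[[^\]]*\]\(([^)]+)\)",
        lambda m: describe_image(m.group(1)),
        markdown,
    )

doc = "Click **Save**. ![screenshot](img/save-dialog.png)"
print(inline_descriptions(doc))
```

The same substitution point is where ASCII renderings could be dropped in instead of prose descriptions.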
I visited a heart doctor at Duke research medical center a few years back. His comments then were that dairy products were the most inflammatory foods for humans and a major contributor to heart disease by gunking up our bloodstreams.