Wait, what? So if I'm a paying Max user, I'd still have to pay more? Don't see the value. Would rather have a repo skill to do the code review with existing Claude Max tokens.
Not that I want to be seen blindly defending the CIA's actions or anything, but the article is saying that people are accusing the CIA of keeping this hidden in a vault for 60 years.
Yet the article has been out in the open for 12 years and people still didn't notice it. So why assume malice in the first case when we can assume that, just as it went unnoticed in the open, it was probably just stuck in a vault and nobody knew it existed there either?
So this just sounds like "we found one obscure document that had an interesting detail..." being spun into a whole thing?
I can imagine that these kinds of things will all slowly come to light with AI-led discovery now.
Today every single software engineer has an extremely smart and experienced mentor available to them 24/7. They don't have to meet them for coffee once a month to ask basic questions.
That said, I still feel strongly about mentorship. It's just that you can spend your quality time with the busy person on higher-level things, like relationship building, rather than on basic questions.
How would this affect future generations of ... well, anyone, when they have 24/7 access to an extremely smart mentor who will find a solution to pretty much any problem they might face?
You can't just offload all the hard things to the AI and let your brain waste away. There's a reason the brain is compared to a muscle: you have to actively use it to grow it (not physically in size, obviously).
I agree with you about using our brains. I honestly have no idea.
But I can tell you that, just like with most things in life, this is yet another area where we increasingly get to do just the things we WANT to do (thinking about code or features and having them appear, pixel pushing, smoothing out the actual UX, porting to faster languages) and not the drudgery most people don't want to do (writing tests, formatting code, refactoring manually, updating documentation, manually moving tickets around like a caveman). Or, to use a non-tech example, spending hours fixing Word document formatting.
So we're getting more spoiled. For example, kids today have never waited more than 20 minutes for a table at a restaurant (which most people used to do all the time before abundant food delivery and reservation systems). Not that we ever enjoyed waiting, but learning to be bored, learning not to expect instant gratification, is something that's disappearing all over in life.
Now it's happening even with work. So I honestly don't know how it'll affect society.
Just because you have every instruction manual doesn't mean you can follow and perform the steps or have time to or can adapt to a real world situation.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
I've been a tech lead for years and have written business critical code many times. I don't ever want to go back to writing code. I am feeling supremely empowered to go 100x faster. My contribution is still judgement, taste, architecture, etc. And the models will keep getting better. And as a result, I'll want to (and be able to) do even more.
I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Any "idiot" can build their own software tailored to how their brains think, without having to assemble gobs of money to hire expensive software people. Most of them were never going to hire a programmer anyway. Those ideas would've died in their heads.
> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Programming was already “democratized” in the sense that anyone could learn to program for free, using only open-source software. Making everyone reliant on a few evil megacorporations is the opposite of democratization.
You know what they mean by that term, it's about building things without needing to put in the learning effort. I have bosses building small POCs via vibe coding, something they would not have done via learning to code and typing it manually.
It's the same sort of argument artists use when it comes to AI-generated media. There obviously is a qualitative difference between people now able to generate whatever they want and people needing to draw something by hand, so saying "they could've just learned to draw themselves" is not very convincing. People don't want to do that yet still want an output, and I see nothing wrong with that. If you do, it's just another sort of gatekeeping: insisting the "proper" way is to learn it by hand.
Okay, they are paying Anthropic, I don't really care about the semantics when at the end of the day they get an output they didn't get before.
Open-weight models run on consumer hardware. If you think corporations will make every single piece of hardware unaffordable, then I'm not sure what to tell you.
If AI completely erases the profession of software developer, I'll find something else to do. Like I can't in good faith ever oppose a technology just because it's going to make my job redundant, that would be insane.
Take that to its extreme. Suppose there was a technology that you do not own that would make everyone's job redundant. Everyone out of a job. There is no need for education, for skills to be mastered, for expertise. Would it still be insane to complain?
Isn't that what old-school software did for many years? It used to take jobs, just not from developers. If you implement software that takes accounting from 10 people to 2, 8 just got fired. If you have a support solution helping one support rep answer 100 requests instead of 20, you just cut the support force five to one.
I'm in the SaaS boat myself, but I sense a bit of dishonesty from senior devs complaining about technology stealing jobs. When it was them doing the stealing, it was fine. Now that the tables have turned, suddenly the technology is bad.
Jevons' paradox still exists. Making X cheaper (usually by needing fewer people to do one unit of X) can and often does lead to more people being needed for X.
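The mechanism behind Jevons' paradox can be shown with a toy calculation (all numbers here are made up for illustration): if demand for X is elastic enough, cutting the labor needed per unit of X increases the total labor spent on X.

```python
# Toy illustration of Jevons' paradox with made-up numbers:
# if demand is elastic enough, cutting the labor per unit
# can increase the TOTAL labor employed on X.

def total_labor(hours_per_unit: float, price_elasticity: float,
                base_units: float = 100.0, base_hours: float = 10.0) -> float:
    """Units demanded scale as (cost ratio) ** -elasticity."""
    cost_ratio = hours_per_unit / base_hours
    units = base_units * cost_ratio ** -price_elasticity
    return units * hours_per_unit

before = total_labor(hours_per_unit=10.0, price_elasticity=1.5)  # baseline: 1000 hours
after = total_labor(hours_per_unit=2.0, price_elasticity=1.5)    # 5x cheaper per unit

print(before, after, after > before)  # total hours went UP despite cheaper units
```

With an elasticity above 1, the 5x productivity gain more than quintuples demand, so aggregate labor rises; with inelastic demand (elasticity below 1) the opposite happens, which is why the paradox holds only "often", not always.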
Take that to the absolute extreme. Why do we even need a job? If all our physical needs are met, maybe humanity can finally focus on real problems (spiritual, mental, interpersonal) that no amount of "jobs" can solve...
I think what a lot of anti-AI folks are trying to argue without saying it explicitly is that it already is a systemic problem. They're not necessarily against the technology on its own, but against the systemic problems it would introduce if society doesn't take a stance against it.
I don't have an answer for this, and won't pretend to.
But my take on this is that accountability will still be a purely human factor. It still is. I recently let go of a contractor who was hired to run our projects as a Scrum/PM, and his tickets were so bad (there were tickets with 3 words in them; one ticket in the current sprint was blocked by a ticket deep in the backlog; basic stuff). When I confronted him about them, he said the AI generated them.
So I told him that:
1. That's not an excuse, his job is to verify what it generated and ensure it's still good.
2. That actually makes it look WORSE: not only did he do nearly zero work, he didn't even check the most basic outputs. And I'm not anti-AI; I expressly said that we should absolutely use AI tools to accelerate our work. But that's not what happened here.
So you won't get to say (at least for another few years, I think) "my AI was at fault" – you are ultimately responsible, not your tools. So people will still want to delegate those things down the chain, but ultimately they'll have to delegate to fewer people.
In general I agree. But it’s somehow very unlikely for the AI to generate a three word ticket. That’s what humans do. AI might generate an overly verbose and specific ticket instead.
Just sold a house/moved out after being laid off in mid-January from a govt IT contractor (there for 8 great years, mostly remote). I started my UX research, design, and front-end web development career in 2009, but now I think it's almost a stupid, go-nowhere, vanishing career, thanks to AI.
I think much like you that AI is and will just continue to destroy the economy! At least I got to sell a house and make a profit, stashing it away for when the big AI market crash happens (hopefully not a 2030 great depression, though). Then it's a down market, and buying stocks, bitcoin, and houses is always cheaper.
Any given system will still need people around to steer the AI and ensure the thing gets built and maintained responsibly. I'm working on a small team of in-house devs at a financial company, and not worried about my future at all. As an IC I'm providing more value than ever, and the backlog of potential projects is still basically endless- why would anyone want to fire me?
The difference between having a non-technical person and someone who is capable of understanding the code being generated and the systems running it is immense, and will continue to be so over the foreseeable future.
Why would it need people to steer the AI? I can easily see a future where companies that don't rely on the physical world (like manufacturing) are completely autonomous, just machines making money for their owner.
It's easy to imagine but there's still a vast amount of innovation and development that has to happen before something like that becomes realistic. At that point the whole system of capitalism would need to be reconsidered. Not going to happen in the foreseeable future.
Congrats - you caused me to create an account to reply, due to the sheer density of your incorrectness.
- First, the LTV was not Marx's idea. Adam Smith held the same view, as did many many others during this era. Marx refined this idea, but there's nothing about your point that is unique to his version of it.
- Second, while LTV is not widely used today, this is not because it was "completely disproven" (can you cite anything to back this claim up?). It is because economics shifted to a different paradigm based on marginal utility. These two frameworks operate at different levels of abstraction and address different aspects of the price of goods. There is actually empirical evidence of a correlation between the cost of a good and the cost of the labour, at an aggregate level.
- Third, Marx explicitly differentiated between _value_ and _price_. LTV deals with value exclusively (in other words, what happens when externalities impacting price are accounted for). He would have had no issue accepting that externalities impacting supply and demand would impact price.
The final irony of your comment is that the commenter's claim that you are incorrectly analysing is actually also fully defensible under your (presumably) neoclassical view of economics. In competitive markets, reduced production costs lead to reduced equilibrium prices as competitors undercut each other. The proposition that in the long run, under competition, price tends toward cost is a standard result in microeconomics. The idea that "you charge the maximum the customer is willing to pay" only holds without qualification in monopoly or monopolistic competition with strong differentiation, which are precisely the conditions that increased software supply would erode.
> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Here's the other edge of that sword. A couple back-end devs in my department vibe-coded up a standard AI-tailwind front-end of their vision of revamping our entire platform at once, which is completely at odds with the modular approach that most of the team wants to take, and would involve building out a whole system based around one concrete app and 4 vaporware future maybe apps.
And of course the higher-ups are like “But this is halfway done! With AI we can build things in 2 weeks that used to take six months! Let’s just build everything now!” Never mind that we don’t even have the requirements now, and nailing those down is the hardest part of the whole project. But the higher-ups never live through that grind.
This scenario is not new with AI at all though? 14 years ago I watched a group of 3 front-end devs spin up a proof of concept in ember.js that had a flashy front end and all fake data, and demo it to execs. They wowed the execs, and every time the execs asked "how long would it take to fix (blank) to actually show (blank)?" the devs hit F12, inspect element, typed in what was asked for, and said "already done!".
It was missing years of backend work and had maybe 1/20th feature parity with what we already had, and in hindsight it would have been literally impossible to implement some of the things we would need in the future if we had gone down that path. But they were amazed by this flashy new thing that devs made in a weekend, which looked great but was actually a disaster.
I fail to see how this is any different than what people are complaining about with vibe coded LLM stuff a decade and a half later now? This was always being done and will continue to be done; it's not a new problem.
It re-emphasizes the question of importance. Would a user accept their data needing an AI implementation of a ("manual") migration, and their flow completely changing? Does reliability for existing users even matter in the company's plans? If it isn't a product that needs to solve problems reliably over time, then it was kind of silly to use a DBA that cost twice as much as the backend engineer and only handled the data niche. We progressed from there, or regressed from there, depending on why we are developing software.
The models will not keep getting better. We have passed "peak LLM" already, by my estimate. Some of the parlour tricks wrapped around the models will bring incremental improvements, but the underlying models are done. More data and more parameters are no longer going to do anything.
This is true in most scenarios, and the opposite is also true – that Americans are famously friendly, and even though Canadians may not want to visit to make a point, I think even they would agree that most day to day interactions they'd have would be warm and welcoming.
There might be a bit more hockey ribbing for the next few weeks, but I know there's a ton of respect for Canada's team.
At the end of the day, the idea of "My problem is with the government, and not the people" is as old as time.
I am also a Canadian who has decided not to visit the US until further notice, and honestly, I'm sad about it.
In my 20+ years of regularly travelling to the States, I've almost always had great interactions with the people I've met in all parts of the US I've visited, and I've been all over. "Warm and welcoming" is a very good description.
I think you’re all within your rights to keep your distance from us. We must suffer under our disgraceful leadership, even if it doesn’t accurately represent our people, but there's no reason for you to do the same. We just hope you’re aware it’s only a few more years before we can begin to heal the whole relationship, with saner leaders who hopefully do see the strength and value in a positive relationship with a northern neighbor.
If not, please send help or accept our political refugees because we will have become permanently screwed if this behavior continues past our current orange phase.
Elections are not a good sample of our collective values. The approval rating is quite low and is probably a better measure.
But when it comes to elections, first, somehow “we” get 2 bad choices every time. This last time, I personally feel they were 2 incredibly terrible choices. Then the fumbling from the other side basically assured orange man’s victory. It was a disaster of an election (but sadly appropriate, as it seems like everything we do is a disaster now).
We also have a low voter turnout. So the result isn’t really complete and probably has some bias.
We also have an electoral college which means the winner can have less than 50% of the popular vote and win.
I could probably go on, but I feel the point has been made that election outcomes are not the proxy you think they are.
If "both sides" are equally bad, then both sides equally represent the people, no?
> I could probably go on but I feel the point has been made that election outcomes are not the proxy you think
The purpose of a system is what it does. There are not many grassroots efforts to change the many negatives you listed. Tacit approval, whether through not voting or not fixing what is broken, does not lessen culpability. The outcome is still accurate representation in the aggregate.
If 4 housemates always have a dirty kitchen, it's a reflection on all of them. It may fall short of their ideals, and they can blame Bob for not doing the dishes, but not fixing a problem whose root they know is an indictment, not an excuse.
Most Canadians are visiting Hawaii and California, not Arkansas and South Dakota, so the point still stands for the states most people are going to. (Although Florida and Arizona are both pretty popular destinations too, which somewhat contradicts my point)
South Dakota actually has a few decent tourist attractions west river: (Mt Rushmore, Badlands, Crazy Horse).
With its proximity to Canada and relative cheapness, it likely pulls in quite a few tourists from up north.
One additional South Dakota attraction (although lessening interest as of late) is how much hunting/fishing is available, and how much the community is interested in the ‘visiting’ hunter.
Oh, I wasn't aware of that, thanks! I guess I was only thinking of warmer places, since that's where I tend to travel to. I personally live a bit too far north to drive to the US (in a reasonable amount of time), so I completely forgot that the US is close enough for a summer road trip for most Canadians.
Same has been true the 2-3 times I've visited Canada. I don't think that'll change. I remember how things got pretty heated during the run up to the Iraq War. And we hope that the friendship will endure.
Ironically, in my experience anyway, this is truer in the parts that lean more strongly "Canada should be the 51st state" politically, e.g. the South, where I find day-to-day interactions with people much friendlier than in, say, California.
I have two 27" 5k displays (both more than 5 years old, so they're not HDR or 120Hz).
I know I'm privileged but my biggest issue with them isn't the HDR or 120Hz. It's that the seam between them causes me to not be able to use that "middle" real estate.
So I was side-eyeing a 6k display cuz it would have most of the benefits of a dual 4k but more real estate and more flexibility in windowing.
The curved displays also look quite promising (like the Neo G2), but not feeling like spending money when I have two perfectly good monitors that already work.
Curved displays are polarizing - I definitely am not a fan although some people I know love them. In the olden days of dual monitors when the aspect ratios were more square it made a lot more sense to have them side by side. Now with the aspect ratio so wide it's weird having dual displays side by side. I'm actually tempted to move back to a single 27" display (currently have dual 24" 4k with one oriented vertical and one horizontal).
I literally spent the last 30 mins with DaisyDisk cleaning up stuff in my laptop, I feel HN is reading my mind :)
I also noticed this 10GB VM from CoWork. And was also surprised at just how much space various things seem to use for no particular reason. There doesn't seem to be any sort of cleanup process in most apps that actually slims down their storage, judging by all the cruft.
Even Xcode. The command line tools install and keep around SDKs for a bunch of different OSes, even though I haven't launched Xcode in months. It also keeps a copy of the iOS simulator, even though I haven't launched one in over a year.
> I think it’s because eventually the “few thousand auditable lines” idea vanishes with enough skills added?
I just watched a YouTube interview with the creator. He actually explains it well. OpenClaw has hundreds of thousands of lines you will never use.
For example, if I only use iMessage, I have lots of code (all the other messaging integrations) that will never be used.
So the skills model means that you only "generate code" that _you_ specifically ask for.
In fact, as I'm explaining this, it feels like "lazy-loading" of code, which is a pretty cool idea. Whereas OpenClaw "eager-loads" all possible code whether you use it or not.
And that's appealing enough to me to set it up. I just haven't put in any time to customize it, etc.
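The lazy-loading analogy can be sketched as a plugin registry that constructs an integration only the first time it's requested. This is a generic illustration of the pattern, not OpenClaw's or the skills model's actual design; all names here are hypothetical.

```python
# Minimal sketch of the lazy-loading idea: integrations are registered
# as factories and only constructed on first use, in contrast to an
# eager design that instantiates every integration up front.
# All names (Registry, the "imessage"/"slack" factories) are hypothetical.

class Registry:
    def __init__(self):
        self._factories = {}   # name -> factory callable
        self._loaded = {}      # name -> constructed integration

    def register(self, name, factory):
        self._factories[name] = factory

    def get(self, name):
        # Construct lazily, exactly once, on first request.
        if name not in self._loaded:
            self._loaded[name] = self._factories[name]()
        return self._loaded[name]

registry = Registry()
registry.register("imessage", lambda: "iMessage integration loaded")
registry.register("slack", lambda: "Slack integration loaded")

print(registry.get("imessage"))     # only iMessage is ever constructed
print("slack" in registry._loaded)  # unused integrations stay cold
```

The eager design would instead loop over every factory at startup, which is the "hundreds of thousands of lines you will never use" situation described above.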
I totally get that, and I'm reminded of plugin architectures (e.g. VSCode extensions or browser extensions).
Those extensions don't modify the core codepaths for what they integrate with, but still provide new capabilities for only what I want to use.
I guess I don't see extensibility, agentic capabilities, and more code safety (and fewer tokens burned on codemods) as mutually exclusive. Not saying you're saying that fwiw.
It included flights, hotels, food, and travel expenses for 9,000 people for multiple days, as well as the "party".
US-based travel for 1 person for 5 days is easily 4K. On top of that, some people were probably international, so it would be higher, and on top of that there are the "party" expenses like venue and catering, which probably weren't that significant by comparison.
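A quick back-of-envelope with the figures above (travel only, and every number here is a rough assumption from the comment, not a confirmed cost):

```python
# Back-of-envelope using the comment's figures (all rough assumptions).
attendees = 9000
travel_per_person = 4000  # USD: ~5 days of US-based travel, per the estimate above

travel_total = attendees * travel_per_person
print(f"${travel_total:,}")  # $36,000,000 before international premiums and venue/catering
```

So travel alone plausibly lands in the tens of millions, which is why the venue and catering line items look minor next to it.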
Sorry but these are just not accurate as blanket statements anymore, given how good the models have gotten.
As other similar projects have pointed out, if you have a good test suite and a way for the model to validate its correctness, you can get very good results. And you can continue to iterate, optimize, code review, etc.
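The validate-and-iterate loop described above can be sketched roughly like this. `pytest` is just one example test runner, and `propose_fix` stands in for a model call; both are illustrative assumptions, not any particular project's API.

```python
# Sketch of the "good test suite as a validation loop" idea.
# run_tests shells out to a real test runner (pytest used as an example);
# propose_fix is a hypothetical stand-in for a model call that edits code.
import subprocess

def run_tests() -> tuple[bool, str]:
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def iterate(propose_fix, max_rounds: int = 5) -> bool:
    """Keep asking the model for fixes until the suite passes or we give up."""
    for _ in range(max_rounds):
        ok, log = run_tests()
        if ok:
            return True
        propose_fix(log)  # model edits the code based on the failure log
    return False
```

The key point from the comment is the stopping condition: the model's output is only accepted when an independent check (the test suite) passes, and the failure log becomes the next round's input.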