Hacker News | nilamo's comments

And all the survivors die from radiation? This must be a joke

There's 8 billion people. Statistically, some will survive. Not many, but some.

Amazing that some people thought a pseudorandom number generator would be good at diagnosing health issues it can't even see.

Are you suggesting people take up arms against police? Has that ever gone well for anyone, except as a quick way to die?

As a last resort when all other options have failed? Yeah, if you value democracy and don't want to bend the knee and live under an authoritarian state. The ammo box is listed last for a reason; of course, all other avenues should be pursued first.

But that doesn't change the fact that the government isn't going to stop itself from overstepping the constitution. That duty falls to the people via protest, voting, lawsuits, and, as a last resort, use of force.


This sounds great... in theory. And just sort of assumes that large casualties are acceptable. Or, even worse, that a lone individual can impart change via a well aimed shot, or something.

Both of which are wild and not something the average person should want or expect to happen. Which makes it even stranger that so many people say it all the time.

Have you stopped renegade cops in your community? Or are you only suggesting that other people do, knowing that anyone who attempts it will die?

It just seems insane to seriously suggest fighting a force that has tanks, drones, etc and has full info on where you are at any moment should they decide to take you out with a sniper, and the willingness to use all of those against you while calling you a terrorist.


There is nothing "insane" about it, it is in fact quite simple and straightforward.

It is far more honest to just say "I don't have the stomach for it/I don't want to die" (and there's nothing inherently wrong with that! most humans feel that way) than to pretend that the very well established precedent across history of violence being the only thing that can oust certain forms of tyranny/injustice is somehow beyond your understanding.


How is this not the exact reasoning MAGA uses for Jan. 6?

The problem with the difference between good and bad things is, of course, that one's perspective has an impact.

Americans generally think vandalism is wrong, but also that the Boston Tea Party was a good thing - yadda yadda yadda...


I, for one, do not presume that "reasoning" played any part in what transpired on January 6.

I'm not suggesting it, but looking at history, a couple of notable examples are the Battle of Athens and the Cliven Bundy standoff. Bundy is still grazing his cattle on that land to this day.

Recent article on the younger Bundy, "Ammon Bundy Is All Alone. The anti-government militia leader can’t make sense of his allies’ support for ICE violence." https://www.theatlantic.com/ideas/2026/02/ammon-bundy-trump-...

Ammon Bundy has held relatively libertarian opinions on immigration for a long long time. Since at least the days of the standoffs. His political ideals are closer to the old time westy classical liberalism (something like founding era anti-federalists with a view of the law that essentially mirrors Bastiat) than they are to neo-conservatism.

Well, the other option is to live while bending the knee. Who needs rights anyway??

Sometimes it's done right, like with The Expanse. Although the writers also wrote some of the episode scripts, so that probably helped...

To each their own I guess. I never found the Expanse television series to be very good when compared to the books.

A structured language without ambiguity is not, in general, how people think or express themselves. In order for a model to be good at interfacing with humans, it needs to adapt to our quirks.

Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc
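The interconnectivity idea above (a model querying the vehicle directly instead of guessing from ambiguous prose) could look something like the following minimal sketch. All names here (`CarTelemetry`, its fields, the example values) are hypothetical, invented purely for illustration; no real vehicle API is implied.

```python
# Hypothetical sketch: a telemetry interface a model could query for
# unambiguous answers, rather than inferring state from natural language.
# Every name and value below is made up for illustration.
from dataclasses import dataclass


@dataclass
class CarTelemetry:
    latitude: float
    longitude: float
    ignition_on: bool
    battery_pct: float

    def answer(self, question: str) -> str:
        """Map a loosely phrased question to a precise telemetry reading.

        Naive keyword matching stands in for whatever structured query
        protocol a real integration would use.
        """
        q = question.lower()
        if "where" in q:
            return f"At ({self.latitude:.4f}, {self.longitude:.4f})"
        if "on" in q:
            return "Ignition is on" if self.ignition_on else "Ignition is off"
        if "battery" in q or "fuel" in q:
            return f"Battery at {self.battery_pct:.0f}%"
        return "Unknown query"


car = CarTelemetry(47.6062, -122.3321, False, 62.0)
print(car.answer("how much battery remains?"))  # Battery at 62%
```

The point of the sketch is that the ambiguity gets resolved at the data source, not in the model's head: the car reports ground truth, and the model only translates it back into conversational language.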


>Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.


I agree but it feels like a type-of-mind thing. Some people gravitate toward clean determinism but others toward chaotic and messy. The former requires meticulous linear thinking and the latter uses the brain’s Bayesian inference.

Writing code is very much “you get what you write” but AI is like “maintain a probabilistic mental model of the possible output”. My brain honestly prefers the latter (in general) but I feel a lot of engineers I’ve met seem to stray towards clean determinism.


Yep, humans have had a remedy for the problem of ambiguity in language for tens of thousands of years, or there never could have been an agricultural revolution giving birth to civilization in the first place.

Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved.

Rather than spending orders of magnitude more effort moving forward with bad assumptions from insufficient communication and starting over from scratch every time you encounter the results of each misunderstanding.

Most AI models still seem deep into the wrong end of that spectrum.


> in order to better service ai

That wasn't the point at all. The idea is about rediscovering what always worked to make a computer useful, and not even using the fuzzy AI logic.


I think it's very likely that machine intelligence will influence human language. It already is influencing the grammar and patterns we use.


I think such influence will be extremely minimal, like confined to dozens of new nouns and verbs, but no real change in grammar, etc.

Interactions between humans and computers in natural language, for your average person, are much, much less frequent than the interactions between that same person and their dog. Humans also speak in natural language to their dogs; they simplify their speech and use extreme intonation and emphasis in a way we never do with each other. Yet, despite having been with dogs for 10,000+ years, it has not significantly affected our language (other than giving us new words).

EDIT: just found out HN annoyingly transforms U+202F (NARROW NO-BREAK SPACE), the thousands separator preferred by ISO 80000-1


> I think such influence will be extremely minimal.

AI will accelerate “natural” change in language like anything else.

And as AI changes our environment (mentally, socially, and inevitably physically) we will change and change our language.

But what will be interesting is the rise of agent to agent communication via human languages. As that kind of communication shows up in training sets, there will be a powerful eigenvector of change we can’t predict. Other than that it’s the path of efficient communication for them, and we are likely to pick up on those changes as from any other source of change.


Seems very unlikely. My parent said the effects have already started (but provided no evidence), so I assume you mean within less than a generation. I am not a linguist, but I would like to see evidence of such rapid shifts ever occurring anywhere in the history of languages before I believe either of you.

I have a feeling you only have a feeling, but not any credible mechanism in which such language shifts can occur.


I am a little confused. Every year language changes. Young people, tech, adapting ideologies, words and concepts adopted from other languages, the list of language catalysts is long and pervasive.

Language has never stood still.


GPs original claim was “[Machine Intelligence] already is influencing the grammar and patterns we use.”

Your claim above was “AI will accelerate “natural” change in language like anything else.”

Now these are different claims, but I assumed you were backing up your parent's claim. These are far stronger claims than what you write now, in a way that makes it feel like you are making a Motte-and-Bailey argument.

First of all, if GP's claim is true, linguists should be able to find evidence of that and publish their findings in peer-reviewed papers. To my knowledge, they have not done that.

Second of all, your claim about AI “accelerating” changes in natural language is also unfounded, unless you really mean “like everything else”, in which case your claim is extremely weak to the point where it is a non-claim (meaning, you are not even wrong[1]).

1: https://en.wikipedia.org/wiki/Not_even_wrong


> These are far stronger claims than what you write now

> unless you really mean “like everything else”, in which case your claim is extremely weak

Language responds to changes in context. Books, the printing press, radio, the web, social media, mobile web, all changed how people used language and impacted language.

AI is a dramatic new context, with unique properties:

1. It is the first artifact to actively participate in realtime natural language communication, a striking break with those predecessors.

2. AI language capabilities evolve quickly, and are unlikely to stop soon.

3. As learning during inference becomes prevalent, we will be co-adapting communication with AI in realtime.

4. Model to model communication is in its infancy, but is an entirely new category of language use, by entirely new users.

No preceding change to language context or purpose comes close.

Holding out for studies is reasonable, to determine the level of change. But statements of "not even wrong" make no sense. The default is that changes in communication context and purpose drive changes in language.

Language has never been static or unresponsive to new contexts.


My not even wrong argument was contingent on the weakest interpretation of your argument, where AI would change the language exactly like anything else in human society changes language.

> Changes in how we use language with AI will change even faster when AI starts learning continuously during inference.

This is a stronger claim, and shows that the weakest interpretation of your argument does not apply. I take "not even wrong" back, as this is a testable hypothesis which offers a solid prediction. I can in fact be wrong.

That said, I am skeptical of your claims for the reason stated above. People don't interact with LLM's nearly as much as they do with their dogs, and I am not aware of any research that shows that people who interact with a lot of dogs simplify their languages in human-to-human communication. To the contrary, there is ample research that humans are in fact quite good at context switching. You can speak extremely poorly in a second language you are currently learning, and then in the next sentence speak fluently without hesitation in your native language.


I suggest that dogs are not a good comparison.

Interaction with language models involves a significant use of language and thought. It is not repetitive. And many users (myself included) continually find new ways to use them.

Others may take their time adopting language models, or be slower to branch out into many kinds of use, but young people in particular will be very fast adopters and adapters. That will be the place to watch.

"Even faster" with respect to inference learning, wasn't an attempt to undersell changes happening now. Teachers are experiencing a lot of new issues with how students respond to the availability of models today. One being the potential for students to put less effort into their own communications. If that continues, it won't just be a "dumbing" of literacy, it will have its own impact on vocabulary and grammar.

But looking forward is unavoidable. Models are not going to stay still long enough to say what stage impacted what changes. Model changes are too fast and fluid.

Well, this era is just getting started, so a diversity of expectations makes sense.


I think you might be underestimating human-to-dog interactions. Interacting with dogs requires a whole lot of empathy and thought.

But really this is beside the point. I didn't provide dog interactions as an analogy; rather, I provided it as a counterpoint. We speak differently to dogs than we speak with each other, and have done so for thousands of years. I see no reason why LLM's would have any more profound effects on our language. We will continue to speak with each other in a normal manner just like before.


> We will continue to speak with each other in a normal manner just like before.

We may just be operating on different versions of "significant" change. Because I do agree with that statement.

I just think there will be language changes directly tied to adoption of and adaptation to models in our lives. In addition to the normal drift and adaptation. And that the rate of language change is likely to be faster, both due to interaction with models, and indirectly due to accelerated changes in general.


> Convincing all of human history and psychology to reorganize itself in order to better service ai cannot possibly be a real solution.

I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.

Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.


Speak for yourself. I feel comfortable expressing myself in code or pseudo code and it’s my preferred way to prompt an LLM or write my .md files. And it works very effectively.


> Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc

So no abstract reasoning.


That's great! But it doesn't mean we've finished improving.


Completely agree.

Personally I’m optimistic we’ll continue the trend line of improvement.


I wish I understood how that works. Retail investors are so small, compared with hedge funds and whatnot, that "average people" cannot move a stock price significantly. So, when Trump tweets about a company, how does the stock move? Who is actually doing all that selling to drive the price down?

And, since the price almost always recovers within a week... does it even matter?


Trump has big money friends that control non-retail investment. The tweet is just signalling.

That kind of access and "control" is why they think they can just tweet at Coke to stop using artificial dyes instead of, you know, changing the rules at the organization they run.


It's a tongue-in-cheek description of how buses and trams work. ie: It's not a new idea, we just tacked "driverless" onto it.


Why fix one problem, when another problem also exists?


Classic false dilemma. You're trying to frame my comment as “we can only ever fix one problem” when it is, in fact, “we have constrained resources and urgent systemic failures and so prioritisation is important”.

For example, Budget 2026 did not address the €307 million structural shortfall in university funding. Is basic income for artists a better allocation than third-level education? Or capital expenditure on cancer care? Or NAS opex?

I specifically disagree with this allocation of funds as we live in country filled with specific solvable structural and life-limiting problems that should be solved before artist wellbeing.


That's beside the point? Gaining security by losing freedom was always on the table. What's interesting is the cultural shift toward not caring about losing freedom.


I think it is the point: there is a balance between freedom and safety.

For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.

There are places with even less safety and more “freedom” than the US so people who take an absolutist view towards freedom also need to justify why the freedoms that the US does not grant are not valuable.


> I think it is the point: there is a balance between freedom and safety.

Sometimes. But freedom and security are not always opposed.

It’s possible to trade freedom for security but it’s also possible that freedom creates security. Both can be true at the same time. Surveillance, not security, is what opposes freedom. Surveillance simply trades one form of insecurity for another at the cost of freedom.

> For example, it is illegal to carry a loaded handgun onto a plane. Most people would agree that is an acceptable trade of freedom for safety.

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

2A seems to make the case that the freedom to bear arms creates security. Given how history played out it’s hard to argue against. I’m not arguing we should be able to take guns on planes but 2A is an example of freedom creating security.


Everything I want to do in public I can still do.

What "freedom" is lost? I gain security and lose no freedoms (unless you are doing something illegal).

When property crime is up 53%... plenty of people are willing to lose whatever "freedom" you are referring to in exchange for safety.


How about just general privacy? I mean do you really want someone / the government to be able to track everywhere you go?

- Going to your girlfriend's place while the wife is at work

- Visiting a naughty shop

- Going into various companies for interviews while employed

With mass surveillance there is the risk of mass data leak. Would you be comfortable with a camera following you around at all times when you're in public? I wouldn't be.


You were recorded smoking marijuana, an illegal drug at the federal level.

You were recorded walking into an abortion clinic, and face recognition identified you as a resident of a state where abortion is illegal.


Well aren’t both of those things crimes? I’m not a fan of mass surveillance either but maybe pick a different example.


The second is clearly not. State governments don't have jurisdiction over their residents when they are out of state.


Read about Texas.

It's a crime to leave the state to get an abortion. They can prosecute when you return home.

There have been vigilante patrols in West Texas, watching the necessary routes out of the state. The law gives any resident the grounds to turn in their neighbor for planning to get an abortion.


Is "crime" one and the same as "wrong"?


The solution is to change the laws, not to stop enforcing them. Otherwise this is basically just giving up on the concept of having laws.


The point is to maintain pressure so that even when the law becomes unjust, people aren't immediately harmed.


Selective enforcement has always been the law of the land.


The right to privacy, to not let the government have a master record of everywhere you've ever been and everything you've ever said just in case they decide to someday revoke free speech and due process, or decide it doesn't apply. Lately we have plenty of examples of how quickly that can happen.


The Stasi were "tough on crime" too, back when that was expensive. How quickly we forget. Well, you're welcome to find a panopticon to live in, but excuse others for not finding it a good tradeoff.

