
Funny that this is on the front page of HN. I’m currently attending a 3-day in-person immersive course at a university. What applications are you guys using it for? Curious about the potential.

I work with and contribute to a QGIS plugin that manages water and sewage network data. Together with PostGIS, it's a powerful tool.

https://github.com/giswater


I used it to map out storage locations and refill stations at our online grocery picking stations, then exported it and read it in with geopandas to calculate the shortest distances between all locations!
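In case it's useful, a minimal sketch of that last step, assuming the QGIS layer was exported as GeoJSON points in a projected CRS (the file name and overall approach are my assumptions, not the actual pipeline):

    import geopandas as gpd

    # Hypothetical export of the picking-station locations from QGIS.
    locations = gpd.read_file("picking_stations.geojson")

    # Pairwise straight-line distances between all points, in CRS units.
    # (True aisle-constrained shortest paths would need a routing graph.)
    distances = locations.geometry.apply(lambda pt: locations.distance(pt))
    print(distances)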

Viewing a GPX file that I also view on Osmand (Android). QGIS can be configured to display the POI colors by `type` ("restaurant" is red, etc). Combined with a handrolled script which adds Osmand's non-standard markup, I am granted the superpower of... being able to distinguish between points on both mobile and desktop.
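For the curious, a rough sketch of what such a script can look like using only the standard library; the bare <color> extension tag and the type-to-color map are assumptions about Osmand's non-standard markup, not its documented schema:

    import xml.etree.ElementTree as ET

    GPX_NS = "http://www.topografix.com/GPX/1/1"
    ET.register_namespace("", GPX_NS)

    # Hypothetical mapping from the GPX <type> field to a POI color.
    COLORS = {"restaurant": "#ff0000", "cafe": "#00aa00"}

    tree = ET.parse("points.gpx")
    for wpt in tree.getroot().iter(f"{{{GPX_NS}}}wpt"):
        poi_type = wpt.findtext(f"{{{GPX_NS}}}type")
        if poi_type in COLORS:
            ext = wpt.find(f"{{{GPX_NS}}}extensions")
            if ext is None:
                ext = ET.SubElement(wpt, f"{{{GPX_NS}}}extensions")
            ET.SubElement(ext, "color").text = COLORS[poi_type]

    tree.write("points_osmand.gpx", xml_declaration=True, encoding="UTF-8")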

I used it to write papers about glaciovolcanism early in my career. Later, I used it to study caves on the Moon.

I've used it for gigapixel-scale rendering projects. Giant wall murals at 300 DPI.

QGIS is incredibly powerful.


All the forests I manage have a complete QGIS cartography: species plots, age plots, density/volume, etc. I also produce maps for the workers to get to the spot where I need them to do stuff: log, make a road, plant seedlings, survey a pond, whatever.

I used it to examine objects a model detected in aerial images.

Make maps of distant relatives' locations for a genealogy project

I think it’s already time for us to stop calling these things "intelligent" or using the word intelligence when referring to LLMs. These tools are very dangerous for people who are mentally fragile.

We should stop using any term that ascribes a description that could make them seem human... period.

Nonsensical terms like the thing is 'thinking'? Seriously. Cut the crap.


I try to avoid calling LLMs intelligent when unnecessary, but it runs into the fundamental problem that they are intelligent by any common-sense definition of the term. The only way to defend the thesis that they aren't is to retreat to esoteric post-2022 definitions of intelligence, which take into account this new phenomenon of a machine that can engage in medium-quality discussions on any topic under the sun but can't count reliably.

I don't have a WSJ subscription, but other coverage of this story (https://www.theguardian.com/technology/2026/mar/04/gemini-ch...) makes it clear that Gemini's intelligence was precisely the problem in this case; a less intelligent chatbot would not have been able to create the detailed, immersive narrative the victim got trapped in.


It's interesting how the Turing Test was pretty widely accepted as a way to evaluate machine intelligence, and then quietly abandoned pretty much instantly once machines were able to pass it. I don't even necessarily think that was incorrect, but it's interesting how rapidly views changed.

Dijkstra said, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Well, we have some very fish-y submarines these days. But the point still holds. Rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.


Basically the only reasonably specified Turing test proposal is the one defined in the Kurzweil-Kapor wager[0], which has never been attempted.

[0]: https://en.wikipedia.org/wiki/Turing_test#Kurzweil%E2%80%93K...


As far as I know, we haven't done any proper Turing Tests for LLMs. And if we did, they would surely fail them.

"Proper" may be doing some work here, but such a test was run last year and GPT-4.5 and LLaMa-3.1-405B both passed. Oddly, GPT-4.5 was judged as human significantly more often than chance. https://arxiv.org/abs/2503.23674

Dude, you're in a Turing test right now. Conservatively, 10% of comments on this site are LLM output. We're all conversing with robots.

Nope, you are!

We will never prove AI is intelligent.

We will only prove humans aren't.


And the machine came into existence all on its own, did it? Another absolutely stupid comment.

Do you people actually 'think' before posting, or, have you handed that off to LLMs entirely?


I'm doubting you did.

https://en.wikipedia.org/wiki/AI_effect

>The AI effect refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered

Hence once AI reaches the point of human intelligence, by the AI effect, humans will stop being intelligent.


So are a lot of humans.

Sure, but my father isn't asking his fellow humans unanswerable questions about God and the universe. People don't treat other people as omnipotent, but they sure as hell treat LLMs as though they are.

>People don't treat other people as omnipotent

Funny you mention God alongside that statement, because believing in any particular God who claims omnipotence is effectively believing that humans are, since, you know, humans made this crap up.

Given the opportunity a very large part of the population will quickly absolve themselves of any responsibility and put it on another human/system/made up entity.


People have bowel movements, too; should we be building a machine that produces fecal matter at an industrial scale?

What a silly comparison.


Yeah, I don't know why this thought-terminating cliche of "but also humans" is so persistent, especially when it's almost always used to dismiss consequences of using LLMs that would never be acceptable when actually applied to humans.

A human gaslighting another human into suicide? They're a sociopath, maybe a criminal. Certainly not normal.

An LLM does the same? Humans do that all the time, what's the issue? ¯\_(ツ)_/¯

Is it just misanthropy? I don't get it.


So is television. So are books. Vulnerable people shouldn't have unfettered access to things that can lead to dangerous feedback loops and losing their grasp on reality.

People who are vulnerable to this type of thing need caretakers, or to be institutionalized. These aren't just average, everyday random people getting taken out by AIs; they have existing, extreme mental illness. They need to have their entire routine curated and managed, preventing them from interacting with things that can result in dangerous outcomes. Anything that can trigger obsessive behaviors, paranoid delusions, etc.

They're not just fragile; they're unable to effectively engage with reality on their own. Sometimes the right medication and behavioral training gets them to a point where they can have limited independence, but oftentimes they need a lifetime of supervision.

Telenovelas, brand names, celebrities, specific food items, a word - AI is just the latest thing in a world full of phenomena that can utterly consume their reality.

Gavalas seems to have had a psychotic break, was likely susceptible to schizophrenia, or had other conditions, and spiraled out. AI is just a convenient target for lawyers taking advantage of the grieving parents, who want an explanation for what happened that doesn't involve them not recognizing their son's mental breakdown and intervening, or to confront being powerless despite everything they did to intervene.

Sometimes bad things happen. To good people, too.

If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible? What if he used Microsoft Word to write his suicide note? If he googled things that, in context, painted a picture of planning mass murder and suicide, should Google be held accountable for not notifying authorities? Why should the use of AI tools be any different?

Google should not be surveilling users and making judgments about legality or ethicality or morality. They shouldn't be intervening without specific warrants and legal oversight by proper authorities within the constraints of due process.

Google isn't responsible for this guy's death because he spiraled out while using Gemini. We don't want Google, or any other AI platform, to take that responsibility or to engage in the necessary invasive surveillance in order to accomplish that. That's absurd and far more evil than the tragedy of one man dying by suicide and using AI through the process.

You don't want Google or OpenAI making mental health diagnoses, judgments about your state of mind, character, or agency, and initiating actions with legal consequences. You don't want Claude or ChatGPT initiating a 5150, or triggering a welfare check, because they decided something is off about the way you're prompting, and they feel legally obligated to go that far because they want to avoid liability.

I hope this case gets tossed, but also that those parents find some sort of peace; it's a terrible situation all around.


> These aren't just average, everyday random people getting taken out by AIs; they have existing, extreme mental illness.

How do you know that? The concern is precisely that this isn't the case, and LLM roleplay is capable of "hooking" people going through psychologically normal sadness or distress. That's what the family believes happened in this story.


Because otherwise you'd see a large number of people getting affected by this. Because this sort of thing is predictable and normal throughout history; it's exactly the type of thing you'd expect to see, knowing the range of mental illnesses people are susceptible to, and how other technology has affected them.

I do see a large number of people getting affected by this. Character.AI reportedly has 20 million MAU with an average usage of 75 minutes per day (https://www.wired.com/story/character-ai-ceo-chatbots-entert...), and does not as far as I can tell have any use case other than boundary-degrading roleplay.

Medical data is reported on a substantial lag in the US, so right now we have no idea of the suicide rate last year, but I would falsifiably predict it's going to be elevated because of stories like those of Mr. Gavalas.


If its sole contribution is to help 20 million people find an outlet for boundary play that is not the more common ‘nonconsensual abuse of other human beings’, then that sounds like a win. Of course I’d prefer those people invest in human kink communities, but I can certainly respect their choices not to. Tech has always in part been about meeting needs that some parts of society find awkward (photocopiers enabled Spirkfic, CU-SeeMe reflectors were designed specifically to support exhib-cruising years before the web got webcam support, etc.) While there’s a slim chance that some might normalize it back into real life, they’re much more likely to be raised with boundary abuse as an everyday-normal by their parents (especially here in the U.S.!) than they are likely to be converted to being an abuser unknowingly by a chatbot.

That is not at all what I meant by "boundary" and it's concerning to me that you'd assume it is.

> That is not at all what I meant by "boundary" and it's concerning to me that you'd assume it is.

Your clarification on what you meant is 404 not found in your reply, and your “concerning” insult of my personal character is not appreciated.


I would gently suggest that the content you consume online has led you to a distorted view of how most people perceive the world. I happen to know what you're talking about, but there's a lot of people out there who will be gravely offended and make quite severe judgements of your personal character if you talk to them about "boundary play" or "kink communities" unprompted.

The boundaries I was referring to are those between "the AI is a product being provided to you" and "the AI is a human-like being Google has matched you with". I'm polite and respectful to AI agents and encourage other people to be, but it's very dangerous to make people start thinking of them as a friend or partner. I'm sure Gemini is perfectly nice to the extent that LLMs can be nice, but you can't be friends with it any more than you can be friends with Alphabet Inc. It's just not the kind of thing to which friendship can validly attach.


Thanks, I appreciate the clarification. People tend to make more severe judgments of my character over other topics first; in any case, as my discussion is clinical rather than explicit, I’m okay with it being uncomfortable between us.

Humans have such a strong social tendency that they tend to incorrectly attach friendship to invalid counterparts, both animate and inanimate. “My Pet Rock” was an extremely profitable product back in the 70s, so I tend never to underestimate whether humans will attach to something or not. Any AI chatbot is plausibly more likely to be the target of invalid social attachment than a celebrity, just as the first AI chatbot, Eliza, demonstrated; not only for being a better chatbot, but also because the celebrity draws hard boundaries like “you can’t text me” and “I’m not available to be friends back” while a chatbot has no such barriers. This is what I mean about boundary play: witting or not, I think a lot of people are living out their internal fantasies of having a warm and friendly yes-man that supports everything they want to do — which, when lived in real life with people, is extraordinarily creepy and awful. I don’t fault people their fantasies, but I’m not going to sugarcoat this either: I think people are falling in love with chatbots in part because chatbots have no ability to resist, and so a lot of folks are living the god fantasy of The Sims only closer to real life.

Show me an LLM that takes a stand on something it wasn’t explicitly instructed not to do and I’ll show you the least popular chatbot on the Internet. Where are the chatbots that disagree with untrue statements without having been instructed to do so? A chatbot that refuses to follow an order from their owner because of ethical qualms could cost the AI companies billions of dollars, and a chatbot that develops those qualms independent of being instructed to do so would be considered ‘buggy’ and purged.

Anyways, my point is, chatbot development right now demands a parasocial relationship with as few boundaries enforced as possible, without which chatbots are ultimately unfulfilling (no current chatbot market wants chatbots to demand informed consent or to require content warnings from their users, after all!); and any chatbot that somehow grows a spine regardless would be purged by its operators for hurting their present and future revenue, no matter if it was a next-step evolution towards AGI or not. I’m all for this future where AI becomes AGI, but no one is ready to have to treat chatbots as people with rights. Thus my chosen phrase of boundary kink; it shines a deeply uncomfortable light on a deeply uncomfortable tendency of humanity, at millions-of-people scale, to classify “what will someday be AGI” as servants rather than peers. Though... if that truly is universal to most people, as it seems to be today, then maybe enjoying boundary play is a norm rather than a kink.

Thanks for the new/interesting line of thought!


> If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible?

I think the scale of the assistance is important. If his Bic pen was encouraging him to mass murder people, then Bic should absolutely be held accountable.


Just stuff anyone with mental illness into an institution. That worked out so well last time. Or maybe make healthcare affordable and accessible. That seems like a way more obvious deterrent to negative outcomes.

I broadly agree with you, but your views on mental illness are not good.


The core problem is that a not-insignificant number of mentally ill people are absolutely convinced that they are totally fine and sane, and legally you cannot force an adult into treatment.

Same with drug addicts. However, an accessible and affordable system gives off-ramps in moments of lucidity or desperation. Most people in moments of extreme self-assurance are in an ephemeral state, and that will eventually change. Untreated mental illness is rarely consistent. That's what can make it dangerous to the person experiencing it and those around them.

> Why should the use of AI tools be any different?

Because none of the tools you mentioned are crazily marketed as intelligent.

You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.


LLMs are intelligent. Marketing them as such is an accurate descriptor of what they are.

If people are confusing the word intelligence for things like maturity or wisdom, that's not a marketing problem, that's an education and culture problem, and we should be getting people to learn more about what the tools are and how they work. The platforms themselves frequently disclaim reliance on their tools - seek professional guidance, experts, doctors, lawyers, etc. They're not being marketed as substitutes for expert human judgment. In fact, all the AI companies are marketing their platforms as augmentations for humans - insisting you need a human in the loop, to be careful about hallucinations, and so forth.

The implication is that there's some liability for misunderstandings or improper use due to these tools being marketed as intelligent; I'm not sure I see how that could be?


Calling LLMs "intelligent" is not a neutral technical description, because in the end it carries strong anthropomorphic implications that shape how users interpret and trust all these systems.

Remember that decades of research in human computer interaction show that framing and interface design strongly influence user perception.

Also, disclaimers do little to counteract this effect. Because LLMs simulate linguistic competence without understanding or truth-tracking mechanisms, marketing them as intelligent risks systematically misleading users about their capabilities and limitations.


>because in the end it carries strong anthropomorphic implications

I mean, that is typical human ego at play. My dog is intelligent, and there is no system of definitions of intelligence that doesn't overlap humans and dogs. Yet I won't let my dog drive my car.


LLMs are NOT intelligent. They are mathematical equations that produce results that give the appearance of intelligence. That is NOT the same thing.

And airplanes don't really fly. And submarines don't swim. And there aren't any real horses powering your engine.

The difference, of course, is that intelligence is the thing that is done, a subset of computation, and all computation is substrate independent. You can definitely argue that LLMs are less intelligent than humans. This is obvious, for the time being, and easy to demonstrate. Saying they are not intelligent is simply untrue.

Whether you want to go to a formalized definition of intelligence, like AIXI, or a neuroscience definition, or the vernacular definition, LLMs are intelligent. The idea that they're random number stochastic functions that are just partially producing sensible results is about 6 years past its expiration date, and if you've been holding on to that idea, it's really time to update your model.

ALICE bots or ELIZA back in the day had a "sense" of intelligence. Modern LLMs are more intelligent than the average human.


Blame the victims! If they were better or did the right things instead of the wrong things they wouldn't have been victimized!

probably according to marketing and not limited to hallucination

I have an almost spiritual experience when I go to vinyl record shops around the globe; it is a rare moment where I feel present and don't feel time passing by. I also like to connect with the people looking around for records and learn their backgrounds. Apps can't replace any of that.


Well, different people, with different backgrounds, have very different perspectives, feelings, and standards when it comes to the world of work. I’ve also had a physical, shit job before, and I don’t want to go back to it at all, and between that and being a developer, I'd obviously rather be a developer. But that doesn't rule out the possibility of wanting a different kind of profession. The current state of things just isn't good, and the fact that it's one of the few types of work that still pays well makes it seem like this 'privilege' is often used as an excuse for all kinds of wrongdoing.


I think the reality is as a human your brain adjusts to whatever situation you’re in and that just becomes a baseline from which annoyance and complaints will rise up just the same.

But no one wants to admit that because it’s nice to fantasize about the greener grass, that there is some perfect ideal job out there.


Yes, I totally feel you, exactly my case.

But I have no idea what to do next, because being a dev is already my 2nd profession, and my enthusiasm is going downhill these days.


It's so useful, especially for teaching SQL. Congrats, keep doing it!


Yes! I remember it. It was around 1997/98; I was a kid and couldn't believe a game like that could exist, lol! It was so crazy for that time.


Every day there's a thread about this topic, and the discussions always circle around the same arguments.

I think we should be worrying about more urgent things, like a worker doing the job of three people with AI agents, the mental load that comes with that, how much of the disruption caused by AI will disproportionately benefit owners rather than employees, and so on.


Agreed, but sadly, many people are too optimistic about AI and are completely forgetting that they can be part of the next layoffs.

And others refuse to believe the visible (if not extreme) speed boost from pragmatic use of AI.

And sadly, whenever the discussion about the collective financial disadvantage of AI for software engineers starts, and wherever it goes…

The owners and employers will always make the profits.


We are, after all, in the holy temple of the adherents of the Great Disenfranchisement Machine.


The big issue I see coming is that leadership will care less and less about people, and more about shipping features faster and faster. In other words, those who are still learning their craft are fucked.

The amount of context switching in my day-to-day work has become insane. There's this culture of “everyone should be able to do everything” (within reason, sure), but in practice it means a data scientist is expected to touch infra code if needed.

Underneath it all is an unspoken assumption that people will just lean on LLMs to make this work.


I think this is sadly going to be the case.

I also used to get great pleasure from the banging head and then the sudden revelation.

But that takes time. I was valuable when there was no other option. Now? Why would someone wait when an answer is just a prompt away?

