HN is full of people saying ABCD should know better, and honestly I thought the same. But when I look at almost all of my friends working in critical domains, like judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known, and some even tell me it is the fault of "tech people" for not fixing it and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and also that it is going to keep growing.
Every single person, every one of them, that I have watched google something since AI Overviews launched will instantly reference the AI overview. And that model is some bottom-rung high-volume model, not even Gemini.
Yes, and the world should be a utopia, and everyone should be happy, and we all wish for world peace, yada yada yada. What you are describing is a vision of an ideal world as it should be, but it doesn't help anyone understand real-world problems.
You can't seriously compare the problem of world peace with the problem of exercising the most basic level of critical thinking w.r.t. LLM output after it has already proven itself unreliable. That's not a utopian dream; it's a level of prudence on par with not sticking a fork in an electrical socket.
You're seriously overestimating the average person's ability to understand what LLMs are.
Look at all the influencers, streamers, and podcasters constantly asking them things and taking the answers as fact, live on air.
Isn't The Joe Rogan Experience the most-watched podcast or something? Every episode I've ever stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specialized in news.
People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're going to trust it.
You don't need to know how an LLM works to realize "sometimes the magic ChatGPT box tells me wrong things". Even if you fully fall for the anthropomorphism, this only requires the same level of awareness as realizing that after the third or fourth thing your weird uncle tells you that turns out not to be true, maybe you shouldn't take him at his word.
If human psychology worked like that, lotteries wouldn't be a thing. Nor prayer. There wouldn't be horoscopes in newspapers, nor homeopathy.
One of the various oddities with LLMs in particular is that they are trained on feedback from users, who get the chance to upvote or downvote responses, or A/B test which of two is "better". This naturally selects for responses that are more convincing, which only loosely correlates with more correct.
No shit. Why do people in this thread keep telling me that people are stupid like that's a news flash to me? The fact remains that it is stupid, and especially for educated people like the lawyers/doctors/etc. mentioned upthread, it's sufficiently obvious stupidity that there's no excuse. Yes, I know, that describes a lot of other stupidity. Much of our history as a species is inexcusable.
Edit: though I should be clear: people demonstrably do often learn to discount obviously unreliable sources. Not all the time, but pretty often in the easily verifiable cases, especially where they don't have a major emotional stake.
You may demand that of yourself, but for others we must design around the fact that they are stupid. You do not have the power to change their stupidity, only your response to it.
I would happily bet that you too have fallen for this at least once. Unless you cut AI out of your life completely and do not interact with others.
AI output is like that COVID video of contamination spreading: you almost can't avoid it unless you scrupulously check each and every thing presented to you as fact. And absolutely nobody does that.
Pretty close. I only touched ChatGPT a couple of times a few years ago and haven't used the others (on purpose, at least. Google forces its Gemini summaries on me, but I mostly avoid them, because, umm, see above).
> and do not interact with others.
Most people I interact with are on the same page about AI. But I try to keep my critical thinking online anyway, like I always have. If someone tried to feed me AI slop, I would consider that person to have betrayed my trust and would, to put it gently, try to interact with them less.
That makes you an extremely rare exception. I use AI as a private tutor on various subjects that interest me, to save time and avoid watching hours of low-content videos. But I've separated it out to the degree that I'm running that stuff on a separate laptop to make sure my work product is never contaminated. I quite literally treat it as though it is radioactive and should never touch the rest of what I do.
This answer really isn't good enough. The providers can't simultaneously aim to replace search, claim PhD-level intelligence that will do all the jobs, and hide behind "it makes mistakes" in small print.
I think it's the fluency. Other tools fail visibly. A bad search result looks like a bad search result. A hallucinated quote reads exactly like a real one. There's no signal in the output itself that something is wrong. You have to go back to the source to check, and the whole point of using the tool was to not have to do that.
> almost all of my friends working in critical domains like judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly.
We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests, and who wrote their papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and they
1) expect to be flattered (and LLMs have been built as the ultimate flatterers),
2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and
3) are not particularly interested in the quality of their work as such, but rather in the acceptance of their work. In certain professions, such as judges, doctors, high-level lawyers and engineers, or politicians, they feel (with good reason) that they can demand acceptance of their work and punish those who don't accept it.
This position is what they worked so hard as young people for. They were not working to become the best at their jobs. They were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.
I think this is an issue with anyone who relies on any LLM. But yeah, I agree, and I have had similar issues where someone will get defensive because they just don't want to admit that they (or rather the LLM's response) were wrong. It's hard to tell someone in a "nice/nonchalant" way:
"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"
People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredible, factually correct results one time, only to give them complete and utter bullshit (that sounded legitimate) another time.
Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now. URGHHHH). I really hope we're both wrong.
> but when I look at almost all of my friends working in critical domains like judges, engineers, lawyers, or even doctors, they seem to trust ChatGPT more or less blindly
That's why I lost trust and faith in people who end up in positions like doctor, lawyer, or judge. When I was young, I used to think they must be the smartest, highest-IQ people in society, having read the most books and possessing the highest levels of critical thinking and debate skills. When in fact they were only good at memorizing and regurgitating the right information that school required to pass the exam that gave them that prestigious title, and that's it.
Now, in my mid-30s, when I talk to people from these professions over a beer, at a barbecue, or at any other casual gathering, I realize they're really not that sharp or well read, nor immune to propaganda and misinformation, and anyone could be in their place if they had put in the grind work at the right time. It's a miracle our society functions at all.