Hacker News | krupan's comments

Very soon, and at this point I'm not sure even that would cure the delusions of the few who practically worship LLMs

If you have to do all that, then what's the point of the AI? I'm joking, but I'm afraid many others say the same thing 100% seriously

As an article posted here recently claims, every verification you add to a chain increases the total time of your work by an order of magnitude. So it's only worth optimizing a productive task if you have already removed most of the verifications.
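To make the claim concrete, here is a toy model of it (the base time and the 10x multiplier are my own illustration, not figures from the article): if each chained verification multiplies total time by an order of magnitude, the cost explodes fast.

```python
def total_time(base_hours, verifications, multiplier=10):
    """Toy model: each chained verification step multiplies the
    total time by `multiplier` (an order of magnitude by default)."""
    return base_hours * multiplier ** verifications

# A 1-hour task under 0..3 chained verifications:
for n in range(4):
    print(n, total_time(1, n))  # 1, 10, 100, 1000 hours
```

Under this model, a task is only worth automating if verification steps can actually be dropped, which is exactly the comment's point.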

Now, some people claim that you need to improve the reliability of your productive tasks so you can remove the verifications and be faster. Those people are, of course, a bunch of cowardly Luddites.


At least pre-LLM automation was deterministic and written by a careful human whose job was on the line.

It's more like, the LLM "hallucinated" (I hate that term) and automatically posted the information to the forum. It sounds like the human didn't get a chance to reason about it. At least not the original human who asked the LLM for an answer.

I’m not in AI, but is what's happening here that it builds output from the long tail of its training data? Instead of branching down the more common probability paths, did something in this interaction send it wandering into the data wilderness?

So I asked AI to give it a good name, and it said “statistical wandering” or “logical improv”.
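A minimal sketch of the intuition above, with an entirely made-up toy vocabulary and probabilities: when sampling is flattened (e.g. by a high temperature), the sampler picks long-tail tokens far more often than their raw probabilities suggest.

```python
import random

# Toy next-token distribution: a few common tokens plus a long tail
# of rare ones. Tokens and probabilities are invented for illustration.
vocab = {"the": 0.40, "a": 0.25, "cat": 0.15, "dog": 0.10,
         "axolotl": 0.05, "zeugma": 0.03, "quincunx": 0.02}

def sample_token(dist, temperature=1.0):
    """Sample one token; a higher temperature flattens the distribution,
    making rare 'long tail' tokens relatively more likely."""
    weights = {t: p ** (1.0 / temperature) for t, p in dist.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cum = 0.0
    for token, w in weights.items():
        cum += w
        if r <= cum:
            return token
    return token  # guard against float rounding

random.seed(0)
tail = {"axolotl", "zeugma", "quincunx"}
hits = {1.0: 0, 5.0: 0}
for temp in hits:
    for _ in range(10_000):
        if sample_token(vocab, temperature=temp) in tail:
            hits[temp] += 1
print(hits)  # tail hits rise sharply at the higher temperature
```

This is only the sampling half of the story; why a production bot's distribution would get flattened or skewed in a particular interaction is exactly the open question in the comment.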


If you don't like hallucinate, try bullshit. [NB: bullshit is a technical term; see https://en.wikipedia.org/wiki/On_Bullshit]

https://www.psypost.org/scholars-ai-isnt-hallucinating-its-b...


That is my preferred term, but it seems to derail discussions that might otherwise have been productive (might... such is my hope)

Read TFA. It's not "Someone vibe coded too hard and leaked data"

I hit back, clicked the link again, and it let me through

"A human, however, might have done further testing and made a more complete judgment call before sharing the information"

Because a human would have been fired for posting something that incorrect and dangerous


But funnily enough, the person who was responsible for setting up the bot will likely face no repercussions. In fact, they will probably be rewarded for transitioning their team's workflows to AI.

A machine doesn’t need food, leisure time, or vacations. It doesn’t care.

It also doesn’t care.


I mean, only if it leads to embarrassment right off the bat.

If there is a year or two between writing your security fuck up and it being discovered, the likelihood of repercussions drops significantly.


This is getting off topic, but they did not say the remote humans drive the cars. The cars always drive themselves; the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's OK to proceed forward.

Have you ever heard of an extrapolation like that being incorrect?

Well said! My only qualm with this is the hope that "it" has our interests at heart. "It" is a machine made by humans who work for corporations. I would amend your hope to, "I hope they have our interests at heart by then."
