As an article posted here recently claims, every verification you do in a chain increases the total time of your work by an order of magnitude. So it's only worth optimizing any productive task if you've already removed most verifications.
Now, some people claim that you need to improve the reliability of your productive tasks so you can remove the verifications and be faster. Those people are, of course, a bunch of cowardly Luddites.
It's more like, the LLM "hallucinated" (I hate that term) and automatically posted the information to the forum. It sounds like the human didn't get a chance to reason about it. At least not the original human that asked the LLM for an answer
I'm not in AI, but is what's happening that it's building output from the long tail of its training data? Instead of branching down the more common probability paths, did something in this interaction make it travel into the data wilderness?
So I asked AI to give it a good name, and it said “statistical wandering” or “logical improv”.
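The "long tail" intuition above can be made concrete with a toy sketch of temperature sampling, which is one common way generation wanders into low-probability tokens. Everything here (the function name, the toy logit values) is my own illustration, not anything from the thread: raising the temperature flattens the softmax distribution, so rare tail tokens get picked far more often.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from logits after temperature scaling.

    Higher temperature flattens the distribution, making rare
    "long tail" tokens more likely to be chosen.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):        # inverse-CDF sampling
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy vocabulary: one very likely token (index 0) and a tail of unlikely ones.
logits = [5.0, 1.0, 0.5, 0.2, 0.1]
rng = random.Random(0)
low_temp_tail = sum(sample_token(logits, 0.5, rng) != 0 for _ in range(1000))
high_temp_tail = sum(sample_token(logits, 2.0, rng) != 0 for _ in range(1000))
# At temperature 0.5 the tail is almost never sampled; at 2.0 it is
# sampled hundreds of times out of 1000 draws.
```

The same mechanism is why a model can "travel into the data wilderness": once one unlikely token is emitted, it conditions every subsequent step.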
Funny enough, the person who was responsible for setting up the bot will likely face no repercussions. In fact, they will probably be rewarded for transitioning their team's workflows to AI.
This is getting off topic, but they did not say the remote humans drive the cars. The cars always drive themselves; the remote humans provide guidance when the car is not confident in any of the decisions it could make. The humans define a new route or tell the car it's OK to proceed forward.
Well said! My only qualm is with saying you hope "it" has our interests at heart. "It" is a machine made by humans who work for corporations. I would correct your hope to: "I hope they have our interests at heart by then."