
LLMs have a very hard time saying "I am useless in this situation", because they are explicitly trained to be helpful assistants.

So instead of saying "I can't help you with this picture", the thing hallucinates something.

That is the expected behavior by now. Not hard to imagine at all.
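As a rough illustration of the point (a toy, hypothetical sketch, not any real training pipeline): if preference raters tend to score refusals as "unhelpful", a policy optimized against that signal learns to produce a confident guess rather than admit it can't help.

    # Toy reward sketch, assuming raters penalize refusals.
    # Nothing here reflects an actual RLHF setup; it only shows
    # why the incentive tilts toward guessing.

    def toy_reward(response: str) -> float:
        """Hypothetical scoring: refusals rate poorly, confident answers well."""
        refusal_markers = ("i can't", "i cannot", "i am unable", "i don't know")
        if any(m in response.lower() for m in refusal_markers):
            return 0.1  # honest refusal, marked unhelpful
        return 0.8      # plausible-sounding answer, right or wrong

    candidates = [
        "I can't help you with this picture.",
        "The picture shows a red bicycle leaning against a wall.",  # a guess
    ]
    print(max(candidates, key=toy_reward))
    # -> the confident guess wins, even if it is a hallucination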



No controls in the training data?



