
What we shouldn't do is anthropomorphise it too much. While LLMs can express themselves and interact with us in natural language, their minds are very different from ours: they never learned through having an embodied self, and they can't continuously learn and adapt the way we do. Once a conversation is over, it's as if it never existed, unless it's captured for a future training cycle.

Right now, their ability to learn is severely limited. And yet, they outcompete us quite easily at a lot of different tasks.



Agreed. There are a hundred different kinds of information processing that go into a human-like mind, and we've kinda-sorta built one piece. And there are a lot of pieces that it would be neither sane nor useful to build (e.g. internalized emotions), so we might not see an AI with all the pieces for a very long time ("never" is probably too much to hope for).


It's amusing that our first contact with a completely alien intelligence is with one of our own making.



