And it takes ~20 years to train a new brain so it can coherently answer questions about a wide variety of topics. Even worse, you can't even copy-paste it once you're done!
What we shouldn't do is anthropomorphise it too much. While LLMs can express themselves and interact with us in natural language, their minds are very different from ours: they never learned by having an embodied self, and they can't continuously learn and adapt the way we do. Once the conversation is over, it's as if it never existed, unless it's captured for a future training cycle.
Right now, their ability to learn is severely limited. And yet they outcompete us quite easily at a lot of different tasks.
Agreed. There are a hundred different kinds of information processing that go into a human-like mind, and we've kinda-sorta built one piece. And there are a lot of pieces that it would be neither sane nor useful to build (e.g. internalized emotions), so we might not see an AI with all the pieces for a very long time ("never" is probably too much to hope for).
From a pure data-volume point of view, yes, but relatively little of that would seem to be relevant to our intellectual capacities. If GPT were a robot moving autonomously around in the world with full visual, auditory, and tactile apparatus, it might be a bit different.
Hm, I'm not sure how most of that data would be irrelevant; could you clarify? I think all of that data, along with interacting with the environment, is what creates the level of knowledge and intelligence we have today.