If anything, we should be more ready to anthropomorphize an LLM than a dog or cat.
It's trained and evaluated on how effectively it can complete anthropomorphic data, and at this point it has been shown to form abstract world models from what's present in that training data.
The trick, as humans, is avoiding binary thinking on the topic and recognizing there's almost certainly a spectrum here: certain aspects of anthropomorphizing, like attributing a subjective continuous experience, are silly, but other aspects, like the model having some kind of abstracted representation of emotional states, motivators, or concepts of self (unquestionably things modeled in the mass of social media data fed into these models), deserve to be seriously evaluated and considered.
We're too caught up in either/or thinking. No one wants to be seen as the tinfoil-hat "it's alive" figure, so the 'safe' stance is denying any anthropomorphizing, right up until those people are caught by surprise by research on the performance benefits of emotional language or on jailbreaks that appeal to empathy. At that point there's all too often an anchoring bias where they deny the new information as long as possible (another anthropomorphic trait that seems to have made its way into LLMs, by the way).
Less than 10% of the discussion I see about models today seems particularly enlightened, and leading minds like Hinton, who straight up say you can't autocomplete well without underlying knowledge, get dismissed by people eager to distance themselves from any accusation of being too quick to anthropomorphize the thing that was trained and evaluated on extending anthropomorphic data...
It's a ridiculous state of affairs.