Sure, why can't both things be true? "Intelligence" is just what you call something, and someone else knows what you mean. Why did AI discourse throw everyone back 100 years philosophically? It's like post-structuralism or Wittgenstein never happened.
It's much less important or interesting to nail down some definition here (see HN discourse over the past three years or so) than it is to recognize what it means to assign "intelligent" to something. What assumptions does it make? What power does it valorize or curb?
Each side of this debate does itself a disservice by essentially trying to be Aristotle way too late. "Intelligence" did not precede someone saying it of some phenomenon; there is nothing to uncover or finalize here. The point is that one side really wants, for explicit and implicit reasons, to call this thing intelligent, even if it looks like a duck but doesn't quack like one, and vice versa on the other side.
Either way, we seem fundamentally incapable of being radical enough either to reject AI on its own terms or to properly champion it. It is just tribal hypedom clinging to totem signifiers.
Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another, even if it is sloppy, imprecise, and done by proxy through the definitional debate. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.
I think you can look at it dispassionately from a systems perspective. There is not /really/ a quantifiable threshold for capital-I Intelligence, but there is a pretty well-agreed set of properties for biological intelligence. As humans, we have conveniently made those properties match things only we have. But you can still mechanistically separate out the various parts of our brain, what they do, and how they interact, and we actually have a pretty good understanding of that.
You can also then compare that mapping of the human brain to other biological brains and start to figure out the delta, and which of those things in the delta create something most people would consider intelligence. You can then do that same mapping to an LLM or any other AI construct that claims intelligence. It certainly will never be a biological intelligence in its current statistical-model form. But could it be an Intelligence? Maybe.
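To make that "delta" idea concrete, here is a toy sketch (in Python, since it's really just set arithmetic); the property sets are made-up placeholders for illustration, not a real neuroscientific taxonomy:

    # Toy version of the "delta" comparison described above.
    # These property sets are illustrative placeholders only.
    human = {"language", "episodic_memory", "theory_of_mind",
             "planning", "self_model", "embodiment"}
    dolphin = {"episodic_memory", "theory_of_mind", "planning",
               "self_model", "embodiment", "echolocation_mapping"}
    llm = {"language", "planning"}  # contested; depends who you ask

    # Properties separating humans from the other system:
    print(human - dolphin)  # {'language'}
    print(human - llm)      # the four the LLM arguably lacks
    # Shared properties -- which of these would most people
    # call "intelligence"?
    print(human & llm)      # {'language', 'planning'}

The interesting question is then which elements of those deltas and intersections anyone would actually accept as criteria.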
If you are grounded, I don't think AI did anything to your philosophical mapping of the mind. In fact, it is pretty easy to do this mapping if you take some time and are honest. If you buy into the narratives constructed around the output of an LLM, then you are not, by definition, being very grounded.
The other thing is, human intelligence is the only real intelligence we know about. Intelligence is defined by thought and limited by our thought and language. It provides the upper bound of what we can ever express in its current form. So, yes, we do have a tendency to stamp a narrative of human intelligence onto any other intelligence, but that is just surface level. We decompose it to the limits of our language and categorization capabilities therein.
> The other thing is, human intelligence is the only real intelligence we know about.
There's a long and proud history of discounting animal intelligence, probably because if we actually thought animals were intelligent we'd want to stop eating them.
Octopodes are sentient. Cetaceans have well-developed language. Elephants grieve their dead. Anyone who has owned a dog knows that it has some intelligence and is capable of communicating with us. There's a ton of other intelligences that we know about.
> As humans, we have conveniently made those properties match things only we have.
I think this is the key point. Machine intelligence is not going to look like human intelligence, any more than animal intelligence does. We can't talk to the dolphins, not because they're not smart and don't have language, but because we can't work out their language. Though I'm not sure what we'd even say to them, because they live in a world we'll never understand, and vice versa. When Claude finally reaches consciousness, it's not going to look like a human consciousness, and actually talking to that consciousness is going to be difficult because we won't share a reality.
An LLM is a tool. I can just about stretch to calling it an Artificial Intelligence, but I prefer to stay specific and call it an LLM rather than an AI. It is not conscious or self-aware. It fakes self-awareness because, as a tool, the thing it does is have conversations with humans, and humans often ask it questions about itself. But I don't think anyone actually believes it is self-aware. Not least because the only time it thinks is when prompted.
This is an important point. We know what our DMN (default mode network) is and how we use language as a basis for thought to create concepts and complex ideas. However, language also bounds our thought. What about the dolphin? Whether advanced intelligence can exist without language is a fundamental philosophical problem. We have a pretty good notion that you need some sort of substrate (language) to create intelligence. And we know that mapping the internal state of a brain from inside itself is incredibly hard, and that the way our human brain evolved to do it is really fascinating but also full of hacks and mismatched mappings relative to what we know is actually going on.
Computational cognitive science explores this whole area of mapping language onto underlying semantic meaning. Ultimately, these intelligences will be bound by physics (unless some new physics, or new understanding of it, happens), and classical intelligences are still bound by classical physics. So I am not sure we can't relate to these other intelligences. We may be limited to some translation layer that does not fully map, but can we still relate to some other consciousness? For that matter, consciousness is just another word that vaguely maps to a vast and extremely complex thing in the human brain, and each person has a different understanding of what that is. I don't really have any conclusions; you brought up interesting points. We should sit within this realm of inquiry with a lot of humility, IMO.
The dolphin question, for me, is about what we'd even be able to communicate with a creature that lives in such a different world. Humans mostly live in a 2D environment, for instance - we walk on flat planes, rarely looking up. We always have the ground beneath us, the unattainable sky above. Dolphins live in a 3D space, visiting the air above regularly to breathe, the "ground" below a varying distance away. I have no idea how that would shape their cognition and language, but I'd be amazed if there are any concepts that we would share and be able to talk about when considering our physical environment. Even basic concepts like "above" and "below" would be hard to talk about.
We have fundamental communication problems between humans who have different cultures, as anyone who has worked in a different culture knows. How much different would a dolphin be? And then how much different would an actual AI be? What concepts would we share and be able to build on to understand each other? How do we avoid the fundamental communication misunderstandings when we don't share any concepts of our reality?
They still have mammalian wetware. The dolphin has a relatively advanced neocortex, which means they likely have some relatively advanced processing. They also have an interesting part of their brain that we don't have, and, based on their behavior, it likely handles social and emotional information. We suspect they may even have a model of the self.
They still have roughly the same kind of hardware as we do. Their different brain region is kind of like a coprocessor we don't have. But based on their behavior they are likely doing the same things we are. I would say they would be more like an extreme human culture than something alien. They probably have very different category mappings based on echolocation.
I think because we know their brains are doing a lot of things that are analogs to ours, just with different sensory inputs, we can reason about a dolphin brain and its semantic concepts and category mappings far more easily than about an AI. Dolphins do a lot of the same stuff we do: grief, social groups, predicting the future. I would bet that at a single level of semantic abstraction we have a lot of concepts that map. They have a lot of the same hormones we do. They react to danger very similarly to us. I think a lot maps; we just don't know how to share that with each other beyond observing one another and offerings like food and things that translate for any mammal.
I think this is probably neuroscience vs psychology. We can explain a lot with neuroscience, but two people with essentially identical brain chemistry can have very different psychology. There are people out there who have beliefs and cognition processes that I find completely incomprehensible despite having the same brain and even sharing a language.
I'm not sure how I'd have a meaningful conversation with an animal that has such a different worldview. I guess there's a simple level of conversation, like that which we have with dogs - fetch the stick, good boy, food, need to wee, love the human, etc. But if that's the limit of what we can discuss with dolphins (or an actual AI) then I'd be disappointed.
But at some level, you can "just be" with the other organism. Eat some food. Make some dopamine. Hang out. I feed my dog. I exercise my dog. I exercise myself. I eat. We sit down together, I pet her. We both create oxytocin and perceive that positively when I pet her. Most animals map that to "safety" or "contentment". Survival needs satisfied for now. Who knows what that maps to for my dog, but we exist in a pretty similar state in that moment of being. That very desire to try and map the dolphin is our "I" narrative that /constantly/ wants to map things out and figure the patterns out.
The dolphin has concept maps between objects and semantic meaning, and an "I" narrative. The dog is almost fully present, with no narrative constantly mapping past to future. We probably have a lot more in common with the dolphin, if we can map that somehow.
This article about chimps being fascinated with crystals is right up this conversation's alley. And I am not saying it is wrong to map and communicate; communication means cooperation, deeper connection and meaning, discovering the boundaries of whether we can socially coordinate and form new and exciting groups and collaborations, etc.
I wonder how we would "hang" with an actual AI, though? I guess I'm assuming that it will be a meta-level above the chat and prompts. That's just the ocean it swims in, not its actual consciousness.
Good luck though!