Hacker News

> But it has no intuition.

On the contrary, by your own description:

> Something about the heuristics used to prune the vast search space can make it miss [these].

it does have an intuition. And in these cases, its intuition is wrong.



It's not clear it's helpful in understanding intelligence to simply redefine terms to be "whatever the computer happens to be doing". That way a computer will always appear intelligent, trivially.

Rather, we should start with what known intelligent systems (i.e., animals) are actually doing, and then compare.

I think it's extremely unlikely the mechanism of intuition in relevant animals (esp. humans) has anything to do with heuristics for pruning a search space.


How are you defining intelligence so that animals are "known" to have it but computers aren't? And what do we know about what animals are actually doing?

I think it's very likely that intuition in animals will have similar mechanisms to the things that produce the same results in computer reasoning, since the problems are the same and so optimisation processes (whether evolutionary or designed) will likely converge on the same solutions.


Computers don't reason. Computation isn't a property of any physical system, it's a subjective property attributed by an observer.

For example, consider a circuit. Suppose we attribute TRUE to 3v, and FALSE to 0v. Then two simultaneous inputs of 3v and 3v giving 3v (and so on: 33->3, 03->0, 30->0, 00->0) can be said to compute AND.

But the same physical system is at the same time "computing" OR. Here we attribute TRUE to the 0 signal.
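For what it's worth, the duality is easy to check mechanically. A minimal sketch in Python (the gate and both labelings are hypothetical, not any real circuit):

```python
# One physical input/output behaviour, two incompatible readings.

def gate(a_volts, b_volts):
    """Outputs 3v only when both inputs are 3v."""
    return 3 if (a_volts == 3 and b_volts == 3) else 0

def as_true_high(v):   # labeling 1: 3v = TRUE, 0v = FALSE
    return v == 3

def as_true_low(v):    # labeling 2: 0v = TRUE, 3v = FALSE
    return v == 0

for a in (0, 3):
    for b in (0, 3):
        out = gate(a, b)
        # Under labeling 1 the gate computes AND...
        assert as_true_high(out) == (as_true_high(a) and as_true_high(b))
        # ...and under labeling 2 the very same behaviour computes OR.
        assert as_true_low(out) == (as_true_low(a) or as_true_low(b))
```

The same voltage behaviour satisfies both assertions; which Boolean function the circuit "computes" depends entirely on which labeling the observer picks.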

Any "computer" is actually simultaneously running many, quite radically different, programs. We only attribute one program to it in how we rig devices (screens, keyboards, etc.) to interpret signals to mean something device-relative. If you "plug a speaker in a gfx socket" the output isn't visual. It means something completely different, and the GPU is computing audio now.

So "computational" algorithms, being purely formal specifications, do not describe any unique physical system, and apply to physical systems only in an observer-relative way. Computation is explanatorily bankrupt; it is only a game people play in rigging physical systems to behave as they wish.

Animals however have the intrinsic property of intelligence: it isn't observer-relative. We are, quite literally, the observer imparting meaning to our environment. Our brain states resolve to unique thoughts, not to "many radically different ones simultaneously". Gold isn't both gold AND something else, and neither is thinking.

The category of explanation for these abilities which are intrinsic to animals cannot be, and is not, computational. It's scientific, involving characterising the properties of materials and their causal effects.


> Any "computer" is actually simultaneously running many, quite radically different, programs. We only attribute one program to it in how we rig devices (screens, keyboards, etc.) to interpret signals to mean something device-relative. If you "plug a speaker in a gfx socket" the output isn't visual. It means something completely different, and the GPU is computing audio now.

I'm sure if you spliced an animal's nerves to different places, that would make them "mean" something different. In fact we know that this kind of thing happens in nature, e.g. the star-nosed mole's visual cortex is connected to the parts of its nose that sense touch rather than to its eyes.

> Animals however have the intrinsic property of intelligence: it isn't observer-relative. We are, quite literally, the observer imparting meaning to our environment. Our brain states resolve to unique thoughts, not to "many radically different ones simultaneously". Gold isn't both gold AND something else, and neither is thinking.

How do you know? What would you expect to be different if it was? Often two people do observe the same piece of animal behaviour but disagree about whether it shows the animal was thinking/intelligent.


> I'm sure if you spliced an animal's nerves to different places,

That's sort of right, yes. But if you put their nerves in the wrong devices, animals wouldn't be intelligent at all. And these nerves are themselves plastic, i.e., they aren't like a CPU with a limited set of physical operations; nerves are part of the body and grow.

So intelligence is really bodily. Intelligence is something that occurs because your brain-motor-body system is highly plastic (at the cellular and whole-body level) and is capable of adapting itself to its environment competently in ways that we call "intelligent".

It isn't any computational property of the brain which does this (if we impart any, we cannot impart them uniquely anyway, i.e., they can't explain much). Rather it's the plasticity of the body (including the brain) which provides "the right device" for intelligence.

> Often two people do observe the same piece

Well, suppose I showed a CPU running a "program" to several alien races: none could tell me what program it was running. Each could provide an essentially infinite number of different answers, depending on how they interpret the electrical signals. A program is a purely formal thing that describes no physical system uniquely.

Whereas if I showed animals to these aliens, they could actually describe what processes constituted their intelligence.

And likewise, if I showed them Gold, they could tell it apart from Silver.

Insofar as a really-existing digital computer has anything akin to intelligence, it's because of how its devices behave. I.e., if you showed aliens the LCD, they could say "well, the LCD output is particular, and is interpretable 'this way'".

We are easily fooled by the devices we attach to CPUs into thinking that the CPU is "computing" the device output. It isn't. We have just created an illusion: the keyboard buttons say '1', '+', '2' and the LCD says '3', and we think the CPU has computed 3.

But, looking at the CPU itself, it has actually "computed" an infinite number of things. The '1' signal could be interpreted arbitrarily.

My point here is that talking about "computation" is a deeply unhelpful way to understand anything. It's a profoundly non-explanatory category (it cannot explain any physical system; it is just a formal specification we follow).

Once we throw that away, we can see more clearly that what enables animal intelligence is the character of their "devices": the physical plasticity of their tissues (their growth and adaptation), their motor skills, and so on.

It is possible to describe a tiny subset of intelligence ("reasoning") by a formal system; but that misses why animals can reason in the first place. You don't get the capacity for reasoning by just making electrons dance in an observer-relative way. That requires the observer to already have that capacity.

The capacity itself is a property of the physical system, "the animal".


> We are easily fooled by the devices we attach to CPUs into thinking that the CPU is "computing" the device output. It isnt. We have just created an illusion: the keyboard button says '1', '+', '2' and the LCD says '3' and we think the CPU has computed 3.

> But, looking at the CPU itself, it has actually "computed" an infinite number of things. The '1' signal could be interpreted abitaribly.

That's not really true. We could very easily connect up the CPU to flash a light 3 times, or use a speech synthesizer to say "three". The reason we call these systems "computers" is that we built them to compute like (human) computers, that is, to evaluate mathematical expressions. Understanding computation is useful, even essential, in understanding the behaviour of these systems, just as understanding language and grammar is useful in understanding the pattern of sounds that a person might make under particular circumstances, even though the relationship between particular sounds and particular meanings is arbitrary or at least underdetermined.


If computer chess has taught us anything, it is that "the problems are the same" does not translate to "so are the solutions".

To clarify, computers play chess by searching exhaustively large game trees. Humans, on the other hand, do not. "The problem is the same", but the solutions are not. So we have no reason to assume that "the problems are the same" means that "so are the solutions", for animals and machines, and we have evidence to the contrary.


> To clarify, computers play chess by searching exhaustively large game trees.

That's less and less true with more recent chess engines.

> So we have no reason to assume that "the problems are the same" means that "so are the solutions", for animals and machines

The optimal solutions are the same. Human chessplaying has not had millions of years of evolution go into it.


>> That's less and less true with more recent chess engines.

Search is still the basis of AI chess playing. It's either alpha-beta minimax or Monte Carlo Tree Search (used by Stockfish and Leela Chess, respectively).

Game-tree search algorithms like minimax and MCTS need an evaluation function to select the move that leads to the best board position, and some modern engines train neural networks to estimate these evaluation functions. For example, Leela Chess is based on AlphaZero (I think), which popularised this approach, and Stockfish has recently adopted it. But chess engines still use a good old game-tree search algorithm.
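As a rough sketch of how these two pieces fit together (all names here are hypothetical, and real engines add alpha-beta pruning, quiescence search, and much more):

```python
# Depth-limited minimax: the tree search supplies the skeleton, and a
# pluggable evaluate() scores positions at the horizon. Swapping a
# hand-written evaluate() for a trained network is the modern move;
# the surrounding search stays the same.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable evaluation from `state`, searching `depth` plies."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # heuristic judgement at the horizon
    results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                       moves, apply_move, evaluate) for m in legal)
    return max(results) if maximizing else min(results)
```

For instance, in a toy "counting game" where each move adds 1 or 2 to an integer state and the evaluation is the state itself, `minimax(0, 2, True, ...)` returns 3: the maximiser adds 2, then the minimiser adds 1.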

>> The optimal solutions are the same.

We don't know that. We have programs that can beat humans at chess, but whether they play optimally, or what constitutes "optimal" play in chess, that's difficult to say.


> Game-tree search algorithms like minimax and MCTS need an evaluation function to select the move that leads to the best board position, and some modern engines train neural networks to estimate these evaluation functions. For example, Leela Chess is based on AlphaZero (I think), which popularised this approach, and Stockfish has recently adopted it. But chess engines still use a good old game-tree search algorithm.

Humans will definitely consider lists of possible moves and countermoves and what kinds of board states will result; if you define that as "search" then it seems fair to say that humans search too. A human searches less deeply and gains more of their playing ability from being good at evaluating the resulting positions - but as you say, that's also the direction that computer chess engines are moving in.


I don't know everything about how humans play chess and I don't think anyone knows, either. Maybe they do what you say, maybe they don't. Most of what they do is not open to our introspection so it's hard to know for sure.

Computers, as I said earlier, play by searching "exhaustively". "Exhaustive" can mean different things and I apologise for introducing such vagueness, but let's say that computer players spend all their resources, during play, searching a game tree. Learning evaluation functions, or memorising good board positions during self-play, is done at an earlier, training stage, but it is also a form of search. So both for training and playing it is not "less and less true" that "computers play chess by searching exhaustively large game trees".

I agree that it's a bit confusing, because if you read press announcements by e.g. DeepMind after AlphaGo's win against Lee Sedol, you would indeed get the impression that the field is moving away from search, towards something magickal that can only be performed by deep neural networks and that obviates the need for good, old-fashioned search. But this is just marketing, though of course a very unfortunate kind. In truth, search still reigns supreme in game AI.

One point of confusion is what we call "search". There is one kind of search that is sometimes called "classical search", and that includes algorithms that search explicitly or implicitly defined search trees with nodes that represent search candidates. Minimax and MCTS are this kind of "classical search", as is e.g. Dijkstra's algorithm or binary search, etc. When I say that "search still reigns supreme in game AI", I mean classical search. In machine learning, under PAC-Learning assumptions, a "search" is anything that selects a generalisation of a set of examples, i.e. a hypothesis, from a set of candidate hypotheses (a "hypothesis search space", or sometimes just "hypothesis space"). So gradient optimisation-based methods, like neural networks, do also use search - they search for the model that minimises error on the training or testing set, etc.
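In that PAC sense, even plain gradient descent is a search procedure: each parameter value is a hypothesis, and the optimiser walks through the hypothesis space towards lower training error. A toy illustration (made-up 1-D data, not any real learning setup):

```python
# Gradient descent as search over a hypothesis space: each value of w
# is a hypothesis about the data, and each step moves to a neighbouring
# hypothesis with (locally) lower training error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data, y = 2x

def error(w):
    """Sum-of-squares training error of hypothesis w."""
    return sum((w * x - y) ** 2 for x, y in data)

w = 0.0  # initial hypothesis
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= 0.01 * grad  # step towards a lower-error hypothesis

# w converges towards 2.0, the hypothesis minimising error on this data
```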

For an early discussion of search in machine learning see Tom Mitchell's "Generalization as Search":

https://www.sciencedirect.com/science/article/pii/0004370282...

And for machine learning without search, stand by for my upcoming thesis :)


> I don't know everything about how humans play chess and I don't think anyone knows, either. Maybe they do what you say, maybe they don't. Most of what they do is not open to our introspection so it's hard to know for sure.

We don't know everything about how humans play chess, but as a chess player I can say that at least some of the time a human player will consciously enumerate possible moves (or particular subsets of possible moves) and countermoves, and think about the position after each one in turn. At least I do.

> Learning evaluation functions, or memorising good board positions during self-play is done at an earlier, training stage, but it is also a form of search.

If you define "search" this broadly then essentially any way of playing chess would constitute search?


>> It's not clear it's helpful in understanding intelligence to simply redefine terms to be "whatever the computer happens to be doing".

Famously warned against years ago by Drew McDermott:

However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the first implementation. If he calls the main loop of his program "UNDERSTAND", he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.

Artificial Intelligence meets Natural Stupidity, Drew McDermott, MIT AI Lab, Cambridge, Mass, 1981.

http://www.cs.yorku.ca/~jarek/courses/ai/F11/naturalstupidit...


I'm not redefining anything; heuristics to decide what to focus on, and how, are what intuition is. The fact that it's implemented differently is irrelevant; by that logic LAPACK isn't doing linear algebra because it's not using jury-rigged neural networks augmented with pencil and paper.


The machine isn't using a heuristic. The machine is just a system of electric field oscillations.

A rock rolling down a hill isn't using the "geodesic heuristic" either.

We impart our intentions to the physical systems we have designed: we interpret so-and-so field oscillation as a "heuristic". But this is an observer-relative designation.

By the same observer-relative gesture, the rock has its heuristics too.

Animals, however, actually have heuristics: i.e., dynamical interior models of their environment which are analysed to prescribe action.

Animals, in being present within environments and having genuine interior reasoning/imagination/intuition etc., exist in a different relationship to "the chess game".

They aren't like rocks just "following the geodesic" (they do that at the atomic level, sure). But the relevant heuristics here are genuine interior processes which are dynamically attached to environments.


You have the _horizon effect_: you do something, but the repercussions are too far in the future to see what the effect will be. This can cause the machine (or human) to do bad things. For example, trying to postpone the inevitable loss of a Queen which is at the edge of the horizon by giving up a pawn: this "wins" 2 tempi, pushing the loss of the Queen behind the horizon. The game moves on, the machine sees the loss of the Queen again, and this time can give up a Knight to push it behind the horizon... you get the idea.

There's also the positive horizon effect: you do the right thing, but for the wrong reason, e.g. because there's a refutation that you didn't see. However, the refutation is wrong, and a few forced moves later you do spot why.
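The first (negative) effect can be caricatured in a few lines. With made-up scores (everything here is hypothetical), a depth-limited evaluation prefers the delaying sacrifice, while a deeper search sees through it:

```python
# Toy horizon effect: scores from our side's viewpoint, one per ply.
#   line "A": lose the Queen now.
#   line "B": give up a pawn first, losing the Queen two plies later.
LINES = {
    "A": [-9, -9, -9, -9],    # Queen lost immediately
    "B": [-1, -1, -10, -10],  # pawn now, Queen (and pawn) later
}

def eval_at_horizon(line, depth):
    """Score the position as it appears `depth` plies ahead."""
    return LINES[line][depth - 1]

# With a 2-ply horizon the sacrifice "B" looks better: the Queen loss
# is still beyond the horizon. With a 4-ply horizon, "A" is preferred.
shallow = max(LINES, key=lambda line: eval_at_horizon(line, 2))
deep = max(LINES, key=lambda line: eval_at_horizon(line, 4))
```

Here `shallow` picks "B" and `deep` picks "A": the shallow search keeps "winning" material only on paper, exactly the pattern described above.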



