If computer chess has taught us anything, it is that "the problems are the same" does not translate to "so are the solutions".
To clarify, computers play chess by exhaustively searching large game trees. Humans, on the other hand, do not. "The problem is the same", but the solutions are not. So we have no reason to assume that "the problems are the same" means that "so are the solutions" for animals and machines, and we have evidence to the contrary.
>> That's less and less true with more recent chess engines.
Search is still the basis of AI chess playing. It's either alpha-beta minimax or Monte Carlo Tree Search (used by Stockfish and Leela Chess, respectively).
Game-tree search algorithms like minimax and MCTS need an evaluation function to select the move that leads to the best board position, and some modern engines train neural networks to estimate these evaluation functions. For example, Leela Chess is based on AlphaZero (I think), which popularised this approach, and Stockfish has recently adopted it too. But chess engines still use a good old game-tree search algorithm.
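To make the structure concrete, here's a toy sketch (my own, not any engine's actual code) of minimax with a pluggable evaluation function, which is the shape shared by classical engines. The "game" here is a trivial number-picking game purely for illustration:

```python
# Toy minimax: search the game tree to a fixed depth, then fall back
# on an evaluation function at the leaves. Real engines add alpha-beta
# pruning, move ordering, transposition tables, etc. on top of this.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable evaluation from `state`."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # leaf: ask the evaluation function
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)

# Trivial stand-in game: state is a number, a move adds or subtracts 1,
# and the evaluation is just the number itself.
best = minimax(0, 2, True,
               moves=lambda s: [+1, -1],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # maximizer gains 1, minimizer takes it back -> 0
```

The point is that the evaluation function is a parameter: whether it's a hand-tuned heuristic or a trained neural network, the surrounding search loop is the same.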
>> The optimal solutions are the same.
We don't know that. We have programs that can beat humans at chess, but whether they play optimally, or what constitutes "optimal" play in chess, that's difficult to say.
> Game-tree search algorithms like minimax and MCTS need an evaluation function to select the move that leads to the best board position, and some modern engines train neural networks to estimate these evaluation functions. For example, Leela Chess is based on AlphaZero (I think), which popularised this approach, and Stockfish has recently adopted it too. But chess engines still use a good old game-tree search algorithm.
Humans will definitely consider lists of possible moves and countermoves and what kind of board states will result; if you define that as "search" then it seems fair to say that humans search too. A human searches less deeply and gains more of their playing ability from being good at evaluating the resulting positions - but as you say, that's also the direction that computer chess engines are moving in.
I don't know everything about how humans play chess and I don't think anyone knows, either. Maybe they do what you say, maybe they don't. Most of what they do is not open to our introspection so it's hard to know for sure.
Computers, as I said earlier, play by searching "exhaustively". "Exhaustive" can mean different things and I apologise for introducing such vagueness, but let's say that computer players spend all their resources, during play, searching a game tree. Learning evaluation functions, or memorising good board positions during self-play, is done at an earlier, training stage, but it is also a form of search. So both for training and playing it is not "less and less true" that "computers play chess by searching exhaustively large game trees".
I agree that it's a bit confusing, because if you read press announcements by e.g. DeepMind after AlphaGo's win against Lee Sedol, you would indeed get the impression that the field is moving away from search, towards something magickal that can only be performed by deep neural networks and that obviates the need for good, old-fashioned search. But this is just marketing, though very unfortunate marketing at that. In truth, search still reigns supreme in game AI.
One point of confusion is what we call "search". There is one kind of search that is sometimes called "classical search", which includes algorithms that search explicitly or implicitly defined search trees with nodes that represent search candidates. Minimax and MCTS are this kind of "classical search", as is e.g. Dijkstra's algorithm, or binary search, etc. When I say that "search still reigns supreme in game AI", I mean classical search.

In machine learning, under PAC-learning assumptions, a "search" is anything that selects a generalisation of a set of examples, i.e. a hypothesis, from a set of candidate hypotheses (a "hypothesis search space", or sometimes just "hypothesis space"). So gradient optimisation-based methods, like neural networks, also use search - they search for the model that minimises error on the training or testing set, etc.
For an early discussion of search in machine learning, see Tom Mitchell's "Generalization as Search" (1982).
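To illustrate that PAC sense of "search", here's a toy example of my own (not from Mitchell's paper): learning a threshold classifier by explicitly enumerating a small hypothesis space and picking the hypothesis with the least training error. Structurally, "learning" here is literally a search:

```python
# Learning as search: enumerate a hypothesis space and select the
# hypothesis that minimises error on the training examples.

# Training data: x -> label, learnable by a threshold on x.
examples = [(1, 0), (2, 0), (3, 1), (4, 1)]

# Hypothesis space: threshold classifiers h_t(x) = 1 iff x >= t.
hypothesis_space = [lambda x, t=t: int(x >= t) for t in range(6)]

def training_error(h):
    """Number of training examples the hypothesis gets wrong."""
    return sum(h(x) != y for x, y in examples)

# "Learning" = searching the hypothesis space for minimal error.
best_h = min(hypothesis_space, key=training_error)
print(training_error(best_h))  # -> 0 (the threshold t=3 fits all examples)
```

Gradient descent does the same thing in spirit, except the hypothesis space is continuous (the weight space) and the search moves by following the error gradient instead of enumerating candidates.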
> I don't know everything about how humans play chess and I don't think anyone knows, either. Maybe they do what you say, maybe they don't. Most of what they do is not open to our introspection so it's hard to know for sure.
We don't know everything about how humans play chess, but speaking as a chess player: at least some of the time, a human player will certainly consciously enumerate possible moves (or particular subsets of possible moves) and countermoves, and think about the position after each one in turn. At least I do.
> Learning evaluation functions, or memorising good board positions during self-play is done at an earlier, training stage, but it is also a form of search.
If you define "search" this broadly then essentially any way of playing chess would constitute search?