It is worth noting that this problem is only understood by humans (to the extent it is) after _centuries_ of study by many professional players. My understanding is that even very strong human players must study it for days, months, or years to really understand it well.
So KataGo being unable to handle it without special training doesn't seem like _quite_ as glaring a blind spot as the chess examples from the article appear to be (I'm bad at chess, and even I was able to understand one or two of them).
I'm not trying to undermine your mention of this, in case it comes across that way; on the contrary, I think the comparison is quite interesting. I'm curious whether this reflects a difference between go and chess themselves, a difference in how well specific kinds of AIs handle such problems, or simply a difference in how easily humans can craft and understand problems of varying difficulty in each game.