
Bughouse and AI
Artificial Intelligence (AI) has mastered nearly all games of complete information. Some of them, like checkers, it has even solved. Humans can no longer hold their own against the AI in Western chess, Japanese chess (shogi), Chinese chess (xiangqi), or even Go (Weiqi, "the surrounding game"), which is considered to be the deepest game there is. Even Western chess variants like crazyhouse have been mastered by the machines.
Engines are increasingly "intelligent". The early chess engines relied a great deal on brute-force calculation. The 2016 AlphaGo vs. Lee Sedol match was strange in that the machine appeared to have more positional strength than tactical skill: it fell into some elementary traps, yet managed to win 4-1. An even stronger Go engine defeated the world no. 1, Ke Jie, 3-0 in 2017, proving that the 2016 match was no fluke.
Currently, bughouse engines are not stronger than the strongest humans. Bughouse seems to have escaped the withering gaze of the AI because it is a four-player variant, and relatively underexplored. It is nonetheless a game of complete information, and it poses unique problems for the AI. It is deeper than any single-board variant, and perhaps deeper than Go. Most importantly, it requires attention to the question of other agents. Any good bughouse player must ask themselves: what is my partner planning? A good bughouse player has to play not only for themselves, but for their partner.
What might a human vs. AI contest look like?
1) The machine and the human each play two boards. This would probably be the easiest scenario for the machine to master. (Although the machines have a long way to go here, there is no telling how good they could get: AlphaZero reached roughly 3600 Elo strength, unprecedented in all of history, after four hours of studying regular chess by itself.) The machines are fast, and unencumbered by the complications of having a partner whose ideas are distinct from their own. A machine could probably give the best bughouse players a good fight, if not defeat them the same way it has defeated the greatest players of classical chess and Go.
2) The machine plays two boards, and "simuls" against two humans. This would be harder for the machine than the previous scenario, given a suitable human team.
3) A machine and a human each play on the higher board, and each partners a standardized weaker engine, perhaps around 2000 Elo. Since the test of a good bughouse player is their ability to play for the other board, this would be the most crucial scenario.
Thoughts?