Bad chess openings according to computers

Avatar of Totally_Winsome
andypandy99 wrote:

Are there bad chess openings against computers? I mean, do there exist openings that are OK according to humans, but bad according to computers?

Kasparov has said "Anything that isn't e4 is a waste of time!"

Avatar of Totally_Winsome
ChePlaSsYer wrote:

Yes. Openings where you give material to get dynamic compensation (gambits) are not the best openings to play against computers. They will neutralize your counterplay and you will end up with less material.

Now, I will keep singing.

I wonder how

I wonder why

Yesterday you told me about your 1982 FIDE rating.

And all that I can see is just a patzer question

I'm turning my head up and down

I'm turning turning turning turning turning around

And all that I can see is just a patzer question

 

I think this is a very astute observation and I agree. Gambits are gambles; they depend on being unexpected. Take the Blackburne Shilling Gambit, for example: your opponent doesn't have to take the bait.

Avatar of WSama

Computers will often deem the Dutch Defense an inaccuracy. This is a good example because a lot of people unfamiliar with the opening would be inclined to agree with the computer. I've been playing the Dutch for some time now (occasionally) and I'm still not entirely convinced about it. But masters once championed it.

Lately I've been thinking along the same lines as @Pfren regarding this matter. Great openings have received years of trials and development, often by a handful of players. We're talking deep strategy and experience.

Computers tend to favour more basic moves. Computer opening choices tend to look drawish because they lack an in-depth strategy; they always look for immediate attacks and so forth.

Avatar of WSama

That's not to say computers are bad at openings, they're simply battling with the same issue we are.

There's playing by book theory and then there's playing off the top of your head. Theory in general, or preparation for that matter, will trump spontaneous play 75% of the time. Book theory? Even more so, because then we're talking multiple contributors or participants.

So you take a computer, give it a few minutes to figure out an opening it has no foreknowledge of, and it'll only be slightly better than you and I given the same task.

So I suppose the keyword is 'theory'.

Avatar of drmrboss
WSama wrote:

That's not to say computers are bad at openings, they're simply battling with the same issue we are.

There's playing by book theory and then there's playing off the top of your head. Theory in general, or preparation for that matter, will trump spontaneous play 75% of the time. Book theory? Even more so, because then we're talking multiple contributors or participants.

So you take a computer, give it a few minutes to figure out an opening it has no foreknowledge of, and it'll only be slightly better than you and I given the same task.

So I suppose the keyword is 'theory'.

Looks like you're still stuck with knowledge from 20 years ago.

These days, computers are well advanced, and many opening lines have been extensively checked by computers for human errors.

Don't believe me? Just use whatever human opening book or database you have, and I will use one minute of analysis per move from the Stockfish 13 development version with NNUE on my 4-core PC. At the end of 20 moves, let's see who stands better.
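If you want to set this up yourself, here is a minimal sketch using the python-chess library: you enter your book move, and the engine answers after a fixed think. The "stockfish" binary name, the 60-second budget and the 20-move horizon are illustrative assumptions, not the exact setup described above.

# Minimal sketch of "one minute of engine analysis per move" with python-chess.
# Assumes a UCI Stockfish binary is available on the PATH as "stockfish".
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for _ in range(20):                                            # first 20 moves of the test
        board.push_san(input("Your book move (SAN): ").strip())    # human book side
        result = engine.play(board, chess.engine.Limit(time=60))   # 60 s think per move
        san = board.san(result.move)
        board.push(result.move)
        print("Engine replies:", san)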

Avatar of WSama
drmrboss wrote:

Looks like you're still stuck with knowledge from 20 years ago.

These days, computers are well advanced, and many opening lines have been extensively checked by computers for human errors.

Don't believe me? Just use whatever human opening book or database you have, and I will use one minute of analysis per move from the Stockfish 13 development version with NNUE on my 4-core PC. At the end of 20 moves, let's see who stands better.

Interesting. Leaving aside AI, are you saying that Stockfish can author better continuations and strategy without any foreknowledge of the opening? Or, better yet, are you saying that competitive players are now using Stockfish opening lines rather than opening book theory?

I'd assume this is correct for learning algorithms that are supposed to accumulate theory just like any other player. I didn't know Stockfish was just as advanced.

Avatar of WSama

Given adequate time, engines must find the best move. Doing so is easier in the endgame, but it only gets more difficult and time-consuming as we track back toward the opening. Competitive players have often worked on specific openings for years on end, usually through trial and error. Can Stockfish match that analysis in a matter of minutes, without any tables or comparatives to work with?

It's possible.

Can no human defeat Stockfish on its best settings? That's another interesting question.

Lately I've been feeling like chess has been solved. But perhaps it is a delusion of sorts, like how intuitive thinking can at times be at odds with science. In my opinion, though, chess is always a draw. Once certain threats and dangers have been seen to, that's it: move the pieces as much as you will.

That's why I stated in another post some time ago that chess is about avoiding the draw and pushing on until one of the opponents errs.

Avatar of Ilovemybeechop

The O'Kelly Variation of the Sicilian evaluates as bad for Black, but in practical play it's really good.

Avatar of drmrboss
WSama wrote:

Interesting. Leaving aside AI, are you saying that Stockfish can author better continuations and strategy without any foreknowledge of the opening? Or, better yet, are you saying that competitive players are now using Stockfish opening lines rather than opening book theory?

I'd assume this is correct for learning algorithms that are supposed to accumulate theory just like any other player. I didn't know Stockfish was just as advanced.

Stockfish in 2020 is already a neural network, i.e. AI (Stockfish NNUE).

You might be surprised at how these engines would be rated on knowledge alone, if you take away their search of millions of positions (zero search):

Old Stockfish, knowledge only (zero search), is rated around 800-1000.

Stockfish NNUE would probably be around 1000-1200 (no test done yet).

Leela Chess Zero (zero search) is 2200+ (you can find the bot on another site by searching for "Leela 1 node").

What would a GM's rating be in a pure knowledge test? (15-second bullet is almost zero search; maybe 1500.)

That being said, traditional engines are already comparable to human knowledge, and neural-network AI is vastly superior in pure chess knowledge to a top GM.

Imagine that knowledge superboosted by a search of millions of positions, and you see the move choices of today's top engine play.
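If you want to approximate the "zero search" idea yourself, cap the engine at a single searched node so its choice reflects stored knowledge rather than calculation. A minimal python-chess sketch, assuming a local "stockfish" binary; nodes=1 is the closest UCI stand-in for knowledge-only play.

# Ask the engine for a move with (almost) no search, approximating "zero search".
import chess
import chess.engine

board = chess.Board()  # starting position; any FEN works here
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    result = engine.play(board, chess.engine.Limit(nodes=1))  # one node of search
    print("Knowledge-only choice:", board.san(result.move))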

Avatar of WSama
drmrboss wrote:

Stockfish in 2020 is already a neural network, i.e. AI (Stockfish NNUE).

You might be surprised at how these engines would be rated on knowledge alone, if you take away their search of millions of positions (zero search):

Old Stockfish, knowledge only (zero search), is rated around 800-1000.

Stockfish NNUE would probably be around 1000-1200 (no test done yet).

Leela Chess Zero (zero search) is 2200+ (you can find the bot on another site by searching for "Leela 1 node").

What would a GM's rating be in a pure knowledge test? (15-second bullet is almost zero search; maybe 1500.)

That being said, traditional engines are already comparable to human knowledge, and neural-network AI is vastly superior in pure chess knowledge to a top GM.

Imagine that knowledge superboosted by a search of millions of positions, and you see the move choices of today's top engine play.

Nice. I wonder how far these neural network engines will go with chess. Their pattern recognition must be aeons ahead of human players.

Avatar of pfren

NNUE isn't useful for long time control (LTC) games, like official Correspondence Chess.

It reduces the engine's speed, and it affects the way the engine caches in a negative way, so we keep it disabled when analysing ongoing CC games.
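For readers who want to reproduce that classical-evaluation setup: Stockfish builds of this era (12/13) expose a "Use NNUE" UCI option that can be switched off. A minimal python-chess sketch, assuming a local "stockfish" binary; the depth limit is an arbitrary illustration.

# Analyse with NNUE disabled, falling back to the classical evaluation.
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    engine.configure({"Use NNUE": False})   # option name as exposed by Stockfish 12/13
    info = engine.analyse(board, chess.engine.Limit(depth=25))
    print(info["score"], info.get("pv", [])[:5])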

Avatar of DanGZ94

There are a few openings that engines consider terrible. The Sicilian Dragon and the French Defense are two openings that engines just hate. In the AlphaZero vs Stockfish match, AlphaZero won 155 games, 839 games ended in a draw, and Stockfish won only 6 times; two of those wins came when AlphaZero was forced to play the Dragon and the French Defense. However, at a human level, these openings are obviously not obsolete at all.

Avatar of pfren
DanGZ94 wrote:

There are a few openings that engines consider terrible. The Sicilian Dragon and the French Defense are two openings that engines just hate. In the AlphaZero vs Stockfish match, AlphaZero won 155 games, 839 games ended in a draw, and Stockfish won only 6 times; two of those wins came when AlphaZero was forced to play the Dragon and the French Defense. However, at a human level, these openings are obviously not obsolete at all.

 

- Both the Dragon and the French are 100% fine objectively, regardless of "level" and such, and

- The AlphaZero vs Stockfish match was an obvious scam.

Avatar of tygxc

#33
Carlsen also spoke dismissively about the French Defence. It will be interesting to see if Nepo plays the French against Carlsen.

Avatar of darkunorthodox88

One thing to note: not all engines evaluate positions equally. When it comes to very closed positions, there are some engines I trust more than others. For example, in very clogged, French-like positions, I trust Komodo more than Stockfish. I have seen positions where, both at very high depth, Komodo would give 0.6 whereas Stockfish would give 1.7, and humans would side with Komodo. Stockfish overestimates certain things quite a bit (space, rook lifts, uncastled king safety, etc.).
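If you want to see this kind of disagreement for yourself, feed the same position to both engines and compare the scores. A minimal python-chess sketch; the binary names ("stockfish", "komodo"), the sample French Advance position and the depth are illustrative assumptions.

# Compare two engines' evaluations of the same closed, French-like position.
import chess
import chess.engine

FEN = "r1bqkbnr/pp3ppp/2n1p3/2ppP3/3P4/2P2N2/PP3PPP/RNBQKB1R b KQkq - 2 5"  # French Advance, after 5.Nf3

for name in ("stockfish", "komodo"):
    with chess.engine.SimpleEngine.popen_uci(name) as engine:
        info = engine.analyse(chess.Board(FEN), chess.engine.Limit(depth=30))
        print(name, info["score"].white())   # score from White's point of view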

Avatar of Optimissed

I just played this blitz game. To show just how bad computer analysis can be, I'll post it. The throwaway line at the end was "one player was winning but gave it away". I never thought I was losing. I mean, I know I missed one or two quicker wins, but it was blitz and I've been having trouble with losing on the clock. I even ran it through the analysis tool and it never gave Black as worse than -0.16. That's not losing! No one is ever going to learn much through this chess.com analysis tool.