Computer Chess History 1978 - 1993 - Sargon II and spinoffs
Kathe and Dan Spracklen (background)

In 1976, IM and computer scientist Julio Kaplan of Puerto Rico explained to a group of Berkeley chess players that preparing against computers would be like preparing for any opponent - study their strengths and weaknesses and prepare your openings and strategy accordingly.
Julio Kaplan, 1967 World Junior Champion


Claude Shannon, early pioneer of Artificial Intelligence

Claude Shannon in 1950 described the essence of how computers would play for some time, dividing the game into a static element and a dynamic element - what would later be called the static evaluation function and lookahead.

The static evaluation function

How would you evaluate a position if you weren't allowed to look ahead at all? The static evaluation function tries to assign a number to a position that estimates the chances for either side to win - 0 if the position is even, positive if White stands better, and negative if Black stands better.

The lookahead
The program chooses the move that maximizes its evaluation function by looking at all moves to a certain depth. Some general tree-searching algorithms like alpha-beta pruning and iterative deepening were born in the chess setting.
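The lookahead described above can be sketched in a few lines. This is not Sargon's actual code - just a generic negamax search with alpha-beta pruning over an abstract game, where `evaluate`, `moves`, and `apply_move` are hypothetical callbacks you would supply:

```python
def alphabeta(pos, depth, alpha, beta, evaluate, moves, apply_move):
    """Return the best score for the side to move, searching `depth` plies.

    `evaluate(pos)` scores a position from the side-to-move's point of view;
    `moves(pos)` lists legal moves; `apply_move(pos, m)` returns the new position.
    """
    legal = moves(pos)
    if depth == 0 or not legal:
        # Leaf (or no legal moves): fall back to the static evaluation.
        # A real engine would score checkmate/stalemate specially here.
        return evaluate(pos)
    best = float("-inf")
    for m in legal:
        # Negamax: the child's score from the opponent's view, negated.
        score = -alphabeta(apply_move(pos, m), depth - 1,
                           -beta, -alpha, evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the opponent will avoid this line anyway
    return best
```

Iterative deepening then just means calling this with depth 1, 2, 3, ... until time runs out, keeping the best move from the last completed iteration.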

Improvements were discovered like the "capture-quiescence extension" - continuing the search in positions with a lot of tactics (checks and captures) until a quiet position is reached. The lookahead increased with faster processors and refinements in search algorithms, described in books like Levy and Newborn's How Computers Play Chess.

1982 - 1988

The Sargon II Chess Program by Kathe and Dan Spracklen
Around 1981 Kaplan recommended reading a listing of Sargon II written in 8080 assembler. The original version won the first microcomputer chess tournament in 1978.
This program was written by Kathe and Dan Spracklen of Fidelity Electronics. The program had to be simple, since it ran on an 8-bit machine. It had a lookahead of 6 or 7 ply and a simple static evaluation function to assign a number to a position. It was too small and slow to try to form a plan.

As a first cut, count the material and give the score in units of Pawns, according to the usual evaluation - a minor piece is worth about 3, Rooks 5 and Queen 9.
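That first cut is a few lines of code. A minimal sketch (my own, not Sargon's), using the conventional values just cited, with uppercase letters for White pieces and lowercase for Black:

```python
# Material values in units of Pawns: minor pieces ~3, Rook 5, Queen 9.
# The King gets 0 - it can never be traded, so it carries no exchange value.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """Sum material over an iterable of piece letters.

    Uppercase = White, lowercase = Black. Positive means White is ahead,
    matching the sign convention of the static evaluation function.
    """
    score = 0
    for p in pieces:
        value = PIECE_VALUES[p.upper()]
        score += value if p.isupper() else -value
    return score
```

For example, `material_score("QRrp")` scores White a Queen and Rook against a Rook and pawn: 9 + 5 - 5 - 1 = 8 pawns in White's favor.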

Beyond that, a first attempt is to weight factors like board control - just count the number of moves for each side, giving more points the closer the move is to the center. So a move to the center might be worth 4 times as much as a move to a corner square, and twice as much as a move to an edge square like h4. If material was even, it basically played Monopoly, with the center squares as Park Place and Boardwalk and property values depreciating away from the center. This was the main positional factor in Sargon II - its 8080 assembly language stored it in a variable brdcl, for "board control" (no, not bird call).
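A brdcl-style term can be sketched as follows. This is my own rough interpolation of the ratios in the post (center 4, edge 2, corner 1), not a transcription of Sargon's assembly; the move-destination lists would come from a move generator, which I omit:

```python
def square_weight(file, rank):
    """Weight for a move landing on (file, rank), each coordinate 0-7.

    Rough interpolation of center 4 : edge 2 : corner 1.
    """
    ring = min(file, rank, 7 - file, 7 - rank)  # 0 on the rim, 3 at the center
    if ring == 0 and file in (0, 7) and rank in (0, 7):
        return 1                  # corner square like a1
    return min(ring + 2, 4)       # rim 2, then 3, then 4 toward the center

def board_control(white_dests, black_dests):
    """brdcl-style positional score: each side's legal-move destination
    squares, weighted by center proximity. Positive favors White."""
    w = sum(square_weight(f, r) for f, r in white_dests)
    b = sum(square_weight(f, r) for f, r in black_dests)
    return w - b
```

With a scheme like this, a side maximizes its score simply by keeping pieces aimed at the center - which is exactly the "Monopoly" behavior described above, and exactly the weakness the rest of this post exploits.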

How would you play against this type of strategy? What is wrong with this picture?

The Fidelity Chess Challenger Series - 1987

Fidelity Chess Challenger - about $150 in 1987
These were dedicated hardware games from Fidelity Electronics running about the same algorithm as Sargon II. They were rated over 2100, but this rating rested on tactical consistency combined with strategic weakness. After finding that weakness, I learned to give the machine odds, eventually advancing to Queen odds.
Recall the puzzle from the last post - this was the end of a game at queen odds from Fidelity Chess Challenger in 1987.

White to Play and Win (with only 1 variation)


For one thing, it follows a greedy strategy - maximize its evaluation function with a fairly short lookahead. A greedy strategy is fine if you look far enough ahead, but without enough search, it falls prey to the "horizon effect".

At first I noticed that it could be led down a garden path of greed, grabbing one pawn after another, each one farther from the center, when a knockout punch was right around the corner.

Then I found the pattern of castling on opposite sides and storming the king with pawns. The program wouldn't perceive any kind of race or form any plan; it would just sit there trying to have the largest number of legal moves. When the pawns and pieces reached the vicinity of its king, it didn't see the writing on the wall, and its evaluation would suddenly erode very quickly to checkmate.

This is the game ending in the puzzle above:

I found this worked even at Rook odds, and at Queen odds as well.
The Fidelity Chess Challenger was a learning machine that tried to back up to where it thought its mistake was and improve for the next game, but this was crude, and it couldn't back up far enough to see the writing on the wall.

As we'll see in the next blog posts, these weaknesses would let desktop programs be beaten even at Rook odds for roughly the next 20 years, even after they won tournaments like the Harvard Cup, because players didn't appreciate that they could read a listing of the opponent's brain!

The refinement of the static evaluation function was the basis of computer chess improvement for a long time, in programs like Chessmaster 2000 and the Socrates chess engine of Heuristic Software Systems in Berkeley (Kaplan's company), which won the 1994 Harvard Cup human-vs-machine event and the 1993 ACM computer tournament, running on an IBM PC against large dedicated chess hardware opponents.

I write articles on remedial chess - what I learned about computer chess, how to play blindfold, and how to improve at tactics.