Chess enthusiasts and computer engineers have attempted to build chess-playing machines, with increasing degrees of seriousness and success, since 1769. The motivations essentially consolidate into two: first, to build a machine one can play chess against for solo entertainment, and second, to investigate chess as a problem that might yield some insight into human cognition. Judged against these two goals, the history of computer chess is both a spectacular success and a virtually complete failure.

Chess-playing computers are available at negligible cost, and many programs (including the free GNU Chess, Amy, Pepito, Crafty, and others), running on virtually any modern personal computer, can defeat most master players under tournament conditions, while top commercial programs such as Fritz have surpassed even world-champion-calibre players at blitz and short time controls.

However, to the surprise and disappointment of many, chess has taught us little about building machines with human-like intelligence, or indeed machines that do anything except play excellent chess. For this reason, computer chess (as with other games, such as Scrabble) is no longer of great academic interest to researchers in artificial intelligence, and has largely been replaced as a testing paradigm by more intuitive games such as igo. Chess-playing programs essentially explore huge numbers of potential future moves by both players and apply a relatively simple evaluation function to the resulting positions, whereas a game like igo challenges programmers to consider conceptual approaches to play.

The brute-force methods are useless for most other problems artificial intelligence researchers have tackled, and are believed to be very different from how human chess players select their moves. In some strategy games, computers easily win every game, while in others they are regularly beaten even by amateurs.

Therefore, the fact that the best efforts of chess masters and computer engineers are, as of 2003, so finely balanced should probably be viewed as an amusing quirk of fate rather than the profound comment on thought that many in the past, including some of the early theorists on machine intelligence, took it to be.

Table of contents
1 Brute force vs. strategy
2 Brute force vs. humans
3 Endgame Databases
4 Computer chess implementation issues
5 Chronology of Computer Chess
6 Other chess software
7 Computer Chess theorists
8 The future of computer chess?
9 See also
10 External links

Brute force vs. strategy

In the early days of computer chess there were two general schools of thought. The first camp took a "strategic AI" approach, judging that examination of all possible sequences of moves to any reasonable depth would be impractical, given the astronomical number of possibilities and the limited processing power available. Instead of wasting effort examining bad or trivial moves (and their continuations), they tried to make their programs discriminate between bad, trivial, and good moves, recognize patterns, and formulate and execute plans, much as humans do.

The second camp took a "brute force search" approach, examining as many positions as possible using the minimax algorithm with only the most basic evaluation function. A program might, for example, pay attention only to checkmate, which side has more pieces, and which side has more possible moves, without any attempt at more complicated positional judgement. In compensation, the program would be fast enough to look exhaustively at all positions to a certain depth within its allotted time.
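
In outline, such a brute-force searcher can be sketched in a few lines of Python. This is only a schematic illustration: legal_moves, make_move, and material_balance are hypothetical helpers standing in for a real board implementation.

    # Plain minimax: search every move to a fixed depth and score the
    # leaves with a crude material count (positive = good for White).
    # legal_moves, make_move and material_balance are hypothetical
    # helpers assumed to come from some board representation.
    def minimax(position, depth, white_to_move):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            # Leaf (or mate/stalemate): fall back to the basic evaluation.
            return material_balance(position)
        if white_to_move:
            return max(minimax(make_move(position, m), depth - 1, False)
                       for m in moves)
        return min(minimax(make_move(position, m), depth - 1, True)
                   for m in moves)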

Use of alpha-beta pruning, combined with a number of search heuristics, dramatically improved the performance of brute-force search algorithms. The general consensus in modern times is that chess is a nearly-understood problem as an AI design goal, and that the Chinese game of go now stands at the forefront of challenges to AI designers.

Ultimately, the brute-force camp won out, in the sense that its programs simply played better chess. The game of chess does not lend itself to reliably discriminating between obviously bad, trivial, and good moves with a rigid set of rules; traps are set and sprung by expert players who have mastered the many levels of depth and subtlety inherent in the game. Furthermore, order-of-magnitude advances in processing power have made the brute-force approach far more effective in recent years than it was in the early years. The result is a very solid tactical player that is errorless to the limits of its search depth and time, and the strategic AI approach is now widely regarded as obsolete. It turned out to produce better results, at least in the field of chess, to let computers do what they do best (calculate) rather than to coax them into imitating human thought processes and knowledge.

Brute force vs. humans

On the other hand, it remained an open question whether any amount of brute-force computation would ever be adequate to defeat the expertise of top humans. In 1968, IM David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time), but acknowledged that it would not be long before he was surpassed. It is just as well that Levy did not renew his bet for another ten years, because in 1989 he was crushed by the computer Deep Thought in an exhibition match, and would probably have lost even to earlier, lesser computers.

Deep Thought, however, was still considerably below world championship level, as the then reigning world champion Garry Kasparov demonstrated with two sterling wins in 1989. It was not until 1996 that Kasparov lost his first game to a computer at tournament time controls, in game 1 of his match against IBM's Deep Blue, the first defeat of a reigning world champion by a computer under regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.

In May 1997, an updated version of Deep Blue defeated Kasparov 3.5-2.5 in a return match. IBM keeps a web site of the event at http://www.chess.ibm.com. While not an official world championship, the outcome of the match is often taken to mean that the strongest player in the world is a computer. Such a claim is open to strong debate, as a truly fair human-machine match is difficult to arrange.

Firstly, in one of the games Kasparov was not defeated over the board: he resigned in a technically drawn position, perhaps distraught that Deep Blue was playing such human-like moves. He angrily accused the Deep Blue team of feeding human input to the computer while the game was in progress. That defeat should thus be reckoned more a psychological loss than an indication of inferior chess ability.

Secondly, there are other players whose playing style is recognised as more effective against computer opponents. Kasparov's strength lies in overwhelming opponents tactically and psychologically, both of which play to the strengths of computers, whereas Vladimir Kramnik, who recently defeated Kasparov in a World Championship match, is more content to defend a tenable position, probe for weaknesses, and accumulate small advantages.

Thirdly, Kasparov could not prepare for the machine as he would against a human opponent, because Deep Blue's programming had been adjusted since its previous matches. That incarnation of Deep Blue had no tournament record before the match, whereas the Deep Blue team was able to study and prepare against hundreds of Kasparov's public games.

Finally, a six-game match was too short for Kasparov to adjust to any potential weaknesses of Deep Blue. One great advantage of humans over computers is adaptability. In all his previous encounters with computers Kasparov had finished more strongly than he began, and it is reasonable to suppose that he would have fared better in a twenty-four game match, the traditional length of World Championship matches.

IBM retired Deep Blue after the match and it has not played since. However, other "Man vs. Machine" matches continue to be played. In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics: playing conservatively for a long-term advantage that the computer could not see in its game-tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers, which are at their strongest when defending against such attacks. True to form, Fritz found a watertight defence, and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win, and that Kramnik had effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.

In January 2003, Kasparov played Deep Junior, another chess computer program, in New York. The match ended 3-3.

Endgame Databases

Computers have been used to analyse some chess endgame positions completely. Such endgame databases are generated in advance using a form of retrograde analysis. Ken Thompson, perhaps better known as the key designer of the UNIX operating system, was a pioneer in this area. Over the years, other endgame database formats have been released, including the Edwards Tablebases, the De Koning Endgame Database (released in 2002), and the Nalimov Endgame Tablebases, the current standard supported by most chess programs, such as Fritz. All five-piece endings have now been analysed completely, and some of the six-piece endings are available. A computer using these databases will, upon reaching a position in them, be able to play perfectly and immediately determine whether the position is a win, loss, or draw. Knowing in advance whether a position is a win, loss, or draw is also helpful, since the computer can steer towards or away from such positions depending on the situation.
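
The retrograde analysis itself can be sketched as follows. This is a deliberately simplified, hypothetical outline for endings in which one side can at best win and the other at best draw; all_positions, is_checkmate, successors, and predecessors are assumed helpers for some fixed set of material (say, King and Queen versus King), and real generators are far more elaborate.

    # Work backwards from every checkmate: a position is decided in n+1
    # ply if the winning side (to move) has SOME move into a position
    # decided in n ply, or if the losing side (to move) finds that EVERY
    # move reaches a decided position. Undecided positions are draws.
    def build_tablebase(all_positions, is_checkmate, successors, predecessors):
        dtm = {p: 0 for p in all_positions if is_checkmate(p)}  # ply to mate
        frontier = list(dtm)
        ply = 0
        while frontier:
            ply += 1
            next_frontier = []
            for p in frontier:
                for q in predecessors(p):
                    if q in dtm:
                        continue
                    if ply % 2 == 1:   # winner to move: one good move suffices
                        dtm[q] = ply
                        next_frontier.append(q)
                    elif all(s in dtm for s in successors(q)):
                        dtm[q] = ply   # loser to move: every move loses
                        next_frontier.append(q)
            frontier = next_frontier
        return dtm   # position -> distance to mate in ply; absent = draw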

Endgame databases featured prominently in 1999, when Kasparov played an exhibition match on the Internet against the Rest of the World. A seven-piece queen-and-pawn endgame was reached, with the World Team fighting to salvage a draw. Eugene Nalimov helped by generating the six-piece tablebase for endings in which both sides have two queens, which was used heavily to aid the analysis of both sides.

Computer chess implementation issues

Developers of chess-playing computer systems must decide on a number of fundamental implementation issues. These include how to represent the board, how to search among possible moves, and how to evaluate leaf positions, each discussed below.

Implementors also need to decide whether to use endgame databases or other optimizations, and they often implement common de facto chess standards.

Board Representations

One of the simplest ways to represent a board is to create an 8x8 two-dimensional array (or, equivalently, a 64-element one-dimensional array). Each array element identifies what piece or pawn occupies the given square, or else marks the square as empty. A common encoding is to consider 0 as empty, positive as white, and negative as black, e.g., white knight +1, black knight -1, white bishop +2, and so on.

One problem with this approach is that knight moves must be carefully examined to make sure they do not go "off the board". One solution is to use a larger array with a border of sentinel "off-board" squares around the playing area (in practice a 120-element, 10x12 layout, since knight jumps can span two ranks); knight moves then never need special edge-checking code, as the sketch below shows.
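
A minimal illustration of this padded "mailbox" scheme in Python (the piece encoding follows the text; the sentinel value and helper names are arbitrary choices for the example):

    # 120-element board: the 8x8 playing area sits inside a border of
    # OFF squares (10 columns x 12 rows), so knight jumps never leave
    # the array.
    EMPTY, OFF = 0, 99          # 99 is an arbitrary sentinel value
    KNIGHT = 1                  # +1 = white knight, -1 = black knight, ...

    def square(file, rank):
        """Map file 0-7 (a-h) and rank 0-7 (1-8) into the padded array."""
        return (rank + 2) * 10 + (file + 1)

    board = [OFF] * 120
    for r in range(8):
        for f in range(8):
            board[square(f, r)] = EMPTY
    board[square(1, 0)] = KNIGHT            # a white knight on b1

    KNIGHT_OFFSETS = (-21, -19, -12, -8, 8, 12, 19, 21)

    def knight_targets(sq):
        """Destination squares for a knight; no edge checks needed."""
        return [sq + d for d in KNIGHT_OFFSETS if board[sq + d] != OFF]

    print(knight_targets(square(1, 0)))     # b1 -> d2, a3, c3 (34, 41, 43)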

However, another technique has gained favor in many programs: the bitboard. The bitboard appears to have been originally developed by the KAISSA team in the Soviet Union in the late 1960s. A bitboard is a sequence of 64 bits (0 or 1), each of which records the absence or presence (false or true) of some state at one square of the board. A board position can then be represented using a series of bitboards, for example one bitboard per piece type or pawn, for each side. See Bitboard.
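
As a small example of the technique, the following Python sketch computes, in a handful of bitwise operations, every square attacked by the knights on a bitboard (here bit 0 = a1, bit 7 = h1, bit 63 = h8; the file masks stop moves from wrapping between the a- and h-files):

    MASK64 = 0xFFFFFFFFFFFFFFFF        # keep results to 64 bits
    NOT_A  = 0xFEFEFEFEFEFEFEFE        # all squares except the a-file
    NOT_AB = 0xFCFCFCFCFCFCFCFC        # all squares except the a- and b-files
    NOT_H  = 0x7F7F7F7F7F7F7F7F        # all squares except the h-file
    NOT_GH = 0x3F3F3F3F3F3F3F3F        # all squares except the g- and h-files

    def knight_attacks(knights):
        """Bitboard of all squares attacked by the knights on `knights`."""
        return ((knights << 17) & NOT_A  | (knights << 15) & NOT_H  |
                (knights << 10) & NOT_AB | (knights <<  6) & NOT_GH |
                (knights >> 17) & NOT_H  | (knights >> 15) & NOT_A  |
                (knights >> 10) & NOT_GH | (knights >>  6) & NOT_AB) & MASK64

    g1 = 1 << 6                        # a knight on its usual g1 square
    print(bin(knight_attacks(g1)))     # bits set for e2, f3 and h3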

Information beyond piece and pawn placement must also be stored, such as the move number, which side is to move, whether or not castling is possible (and on which side), en passant information, draw-related information such as the count of moves since the last capture or pawn move, and so on.
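
Gathered into one structure (the field names here are illustrative only, not taken from any particular program), the full game state might look like:

    from dataclasses import dataclass

    @dataclass
    class Position:
        board: list             # e.g. the 120-element mailbox array above
        white_to_move: bool
        castling: str           # subset of "KQkq": which castlings remain legal
        en_passant: int         # capture-target square, or -1 if none
        halfmove_clock: int     # plies since last capture or pawn move
        fullmove_number: int    # incremented after each Black move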

Search Techniques

Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on; each individual move by one player is called a "ply". The expansion continues until it reaches a "leaf" position, which is then evaluated.

A naive implementation of this approach, however, would play relatively poorly, so various methods have been devised to greatly speed the search for good moves.
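
The most important such method is alpha-beta pruning, sketched below in the same hypothetical style as the earlier minimax example. Scores here are from the point of view of the side to move (the so-called negamax convention), and evaluate, legal_moves, and make_move are stand-in helpers:

    def alphabeta(position, depth, alpha, beta):
        """Best score for the side to move; prunes refuted lines."""
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)
        for m in moves:
            score = -alphabeta(make_move(position, m), depth - 1, -beta, -alpha)
            if score >= beta:
                return beta     # cutoff: the opponent will avoid this line
            alpha = max(alpha, score)
        return alpha

    # A typical root call searches six ply with an "infinite" window:
    # best = alphabeta(start_position, 6, -10**9, 10**9)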

For more information, see: minimax, alpha-beta pruning.

Leaf evaluation

For most chess positions, computers cannot look ahead to all final possible positions. Instead, they must look ahead a few ply and then evaluate the final board position. The algorithm that evaluates final board positions is termed the "evaluation function", and these algorithms are often vastly different between different chess programs.

Nearly all evaluation functions evaluate positions in units of pawns, and at the least they consider material value. Thus, they will count up the material on the board for each side (where a pawn is worth 1 pawn, a knight 3 pawns, a bishop 3 pawns or slightly more, a rook 5 pawns, and a queen 9 or 10 pawns). Evaluation functions take many other factors into account as well, such as pawn structure, the fact that a pair of bishops is usually worth more than the two pieces counted separately, centralized pieces being worth more, and so on. King safety is usually considered, as well as the phase of the game (opening, middlegame, or endgame).
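
A toy evaluation function along these lines might look as follows. Piece letters follow the FEN convention of upper case for White; pieces_on_board and mobility are hypothetical helpers, and the 0.1-pawn-per-move mobility weight is purely illustrative:

    PIECE_VALUES = {'P': 1.0, 'N': 3.0, 'B': 3.25, 'R': 5.0, 'Q': 9.0, 'K': 0.0}

    def evaluate(position):
        """Score in pawn units; positive favours White."""
        score = 0.0
        for piece in pieces_on_board(position):      # e.g. yields 'N', 'p', ...
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
        # Small bonus for each extra legal move (an illustrative weight).
        score += 0.1 * (mobility(position, 'white') - mobility(position, 'black'))
        return score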

Using endgame databases

It is still unclear how much chess-playing programs benefit from using the current set of endgame databases, because most modern chess programs can already play a majority of such positions well enough using their normal search algorithms and heuristics, without endgame databases as a crutch. It is suspected that this situation will change as more endgame databases (for example, six- and seven-piece tablebases) are generated and used.

The Nalimov Endgame Tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. It is estimated that covering all the six-piece endings will require at least one terabyte. Seven-piece tablebases are currently a long way off.

While the Nalimov Endgame Tablebases handle en passant positions, they assume that castling is not possible. This is a minor flaw that is probably of interest only to endgame specialists.

More importantly, they do not take the fifty-move rule into account. The Nalimov Endgame Tablebases store only the shortest possible mate (in ply) for each position. However, in certain rare positions the stronger side cannot force a win before running into the fifty-move rule. A chess program that probes the database and obtains a value of "mate in 85" cannot know in advance whether such a position is actually a draw according to the fifty-move rule, or a win because some capture or pawn move along the way will reset the counter. Various solutions, including the addition of a "distance to zero" (DTZ) counter, have been proposed to handle this problem, but none has been implemented yet.
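
The arithmetic of the problem is simple: the fifty-move rule allows 100 ply without a capture or pawn move, so a bare distance-to-mate figure settles the question only in the easy case, as this small illustrative check shows:

    def surely_winnable(dtm_plies, halfmove_clock):
        """True if mate arrives before the fifty-move rule can intervene."""
        return dtm_plies <= 100 - halfmove_clock

    # With "mate in 85" and a fresh counter the win is certain, but if 30
    # ply have already passed, the tablebase alone cannot say whether a
    # capture or pawn move in the winning line resets the counter in time.
    print(surely_winnable(85, 0), surely_winnable(85, 30))   # True False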

Other optimizations

Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables record positions that have already been evaluated, so that they need not be recalculated. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in similar positions (since a move that refutes one position is likely to refute another). Opening books aid programs by supplying common opening lines that are considered good play (and good ways to counter poor openings).
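
A transposition table is usually keyed by a Zobrist hash: every (piece, square) pair is assigned a fixed random 64-bit number, and a position's key is the exclusive-or of the numbers for its pieces. The following Python sketch shows the idea; a real table would also hash the side to move, castling rights, and en passant state, and would bound its memory use:

    import random

    random.seed(0)   # fixed seed so the keys are reproducible
    PIECES = 'PNBRQKpnbrqk'
    ZOBRIST = {(p, sq): random.getrandbits(64)
               for p in PIECES for sq in range(64)}

    def zobrist_key(piece_squares):
        """`piece_squares`: iterable of (piece letter, square 0-63) pairs."""
        key = 0
        for piece, sq in piece_squares:
            key ^= ZOBRIST[(piece, sq)]
        return key

    table = {}   # key -> (depth searched, score)

    def probe(key, depth):
        entry = table.get(key)
        if entry and entry[0] >= depth:   # only trust deep-enough results
            return entry[1]
        return None

    def store(key, depth, score):
        table[key] = (depth, score)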

Of course, faster hardware and additional processors can improve chess-playing program abilities, and some systems (such as Deep Blue) use specialized chess hardware instead of solely software implementations.

Standards

Computer chess programs usually support a number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth-Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation.
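
FEN packs an entire position into a single line of text; the standard starting position, for instance, is the string shown below. The small reader that follows decodes only the first field (piece placement), where ranks run from 8 down to 1 and digits stand for runs of empty squares:

    START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

    def fen_board(fen):
        """Return the piece-placement field as a list of 8 ranks."""
        board = []
        for row in fen.split()[0].split('/'):
            rank = []
            for ch in row:
                if ch.isdigit():
                    rank.extend('.' * int(ch))   # a run of empty squares
                else:
                    rank.append(ch)              # a piece letter
            board.append(rank)
        return board   # board[0] is the 8th rank, as in a printed diagram

    for rank in fen_board(START_FEN):
        print(' '.join(rank))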

Most computer chess programs are divided into an engine (which computes the best move given a current position) and a user interface. Most engines are separate programs from the user interface, and the two parts communicate with each other using a public communication protocol. The most popular is the XBoard/WinBoard communication protocol; another open alternative is the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface or only the engine, without needing to write both parts of the program.
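
In practice the engine side of such a protocol is a simple read-evaluate-print loop. The Python sketch below handles only the core UCI commands, and search() is a hypothetical stand-in for a real engine:

    import sys

    def uci_loop():
        position = None
        for line in sys.stdin:
            cmd = line.strip()
            if cmd == 'uci':
                print('id name SketchEngine')
                print('id author anonymous')
                print('uciok')
            elif cmd == 'isready':
                print('readyok')
            elif cmd.startswith('position'):
                position = cmd   # a real engine parses "startpos"/FEN and moves
            elif cmd.startswith('go'):
                print('bestmove ' + search(position))   # e.g. "bestmove e2e4"
            elif cmd == 'quit':
                break
            sys.stdout.flush()

    if __name__ == '__main__':
        uci_loop()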

Chronology of Computer Chess

  • 1769 - Baron Wolfgang von Kempelen builds the Automaton Chess-Player, in what becomes one of the greatest hoaxes of its period.
  • 1868 - Charles Hooper presents the Ajeeb automaton, which had a human chess player hidden inside.
  • 1912 - Leonardo Torres y Quevedo builds a machine that can play King and Rook versus King endgames.
  • 1948 - Norbert Wiener's book Cybernetics describes how a chess program could be developed using a depth-limited minimax search with an evaluation function.
  • 1950 - Claude Shannon publishes "Programming a Computer for Playing Chess", one of the first papers to treat the problem of computer chess systematically.
  • 1951 - Alan Turing develops on paper the first program capable of playing a full game of chess.
  • 1952 - Dietrich Prinz develops a program that solves chess problems.
  • 1956 - Invention of the alpha-beta search algorithm by John McCarthy.
  • 1956 - A chess program playing on a bishopless 6x6 board is developed for the MANIAC I at Los Alamos.
  • 1958 - NSS becomes the first chess program to use alpha-beta searching.
  • 1958 - The first programs that can play a full game of chess on an actual computer are developed, one by Alex Bernstein and one by Russian programmers using a BESM.
  • 1967 - First computer chess match, between the Kotok-McCarthy program and the ITEP program developed at Moscow's Institute of Theoretical and Experimental Physics.
  • 1970 - First year of the ACM North American Computer Chess Championship.
  • 1974 - First World Computer Chess Championship.
  • 1977 - Establishment of the International Computer Chess Association.
  • 1980 - Chess 4.6, developed at Northwestern University and running on a Control Data computer, becomes the first chess computer to be successful at a major chess tournament.
  • 1980 - Establishment of the Fredkin Prize.
  • 1982 - Ken Thompson's hardware chess player Belle earns a US master title.
  • 1987 - HITECH, developed by Hans Berliner and Carl Ebeling, wins a match against grandmaster Arnold Denker 3.5-0.5.
  • 1988 - Deep Thought shares first place with Tony Miles in the Software Toolworks Championship, ahead of former world champion Mikhail Tal and several grandmasters, including Samuel Reshevsky, Walter Browne, Gruenfeld, and Gurevich. It also defeats grandmaster Bent Larsen, making it the first computer to beat a grandmaster in a tournament. Its performance rating of 2745 (USCF scale) in this tournament was the highest yet obtained by a computer player.
  • 1989 - Deep Thought loses a two-game exhibition match to Garry Kasparov, the reigning world champion.
  • 1997 - Deep Blue wins a six-game match against Garry Kasparov (see [1]).

Other chess software

There are several other forms of chess-related computer software, including the following:

  • Chess game viewers allow players to view a pre-recorded game on a computer. Most chess-playing programs can also be used for this purpose, but some special-purpose software exists.
  • Chess instruction software is designed to teach chess.
  • Chess databases are systems which allow the searching of a large library of historical games.

Computer Chess theorists

Well-known computer chess theorists include Claude Shannon, Alan Turing, Hans Berliner, and Ken Thompson.

The future of computer chess?

Many observers extrapolate that computers will consistently beat the best human players by perhaps 2010, and then go on to exceed their abilities to the point where a human vs. computer chess match would be as unfair as a human vs. automobile race. Others are unconvinced, saying that there are still deep strategic elements to chess that will resist brute-force computer searching.

See also

External links