Humankind has long concerned itself with being right. Humans bicker, disagree, and even go to war over simple differences of opinion.
Chess is no different. While perhaps created to prevent war and death, chess pits two foes against each other in pursuit of victory over a simple board of 32 pieces. That victory is not easily achieved, despite centuries of chess strategy and analysis. Much like the other tools we have used to rise to the top of the world, we invented computers and artificial intelligence (AI) to understand chess to a superhuman extent. In other words, man created computer programs capable of overcoming an opponent without the slightest error. One of these programs, Stockfish, uses a complex set of rules to evaluate any chess position and calculate a winning strategy. Today, the strongest human chess players, called grandmasters, are fighting a losing war against their own invention.
However, chess hasn’t always been this way. These recent revolutions were built on centuries of human rigor. The first supposed chess machine, the Turk, was built in 1770 by the Hungarian inventor Wolfgang von Kempelen (1a). A clumsy, theatrical automaton of wood, robes, and a turban, the Turk triumphed over opponent after opponent. It was a novelty, and many questioned the viability of such an invention. Rightly so: its success was a hoax, for inside one of the wooden cabinets under the chess board hid a human player, controlling each of the Turk’s moves. While its victories were merely human triumphs in disguise, the dream of an intelligent, man-made competitor lay in wait. It would be nearly two centuries before a true chess engine was created.
The creation of computers reawakened the dream. Computers run on algorithms: step-by-step procedures that define how a system gets from A to B. An algorithm captures a process that has been tried and tested. A simple example is a cake recipe. It took trial and error – a few wasted eggs here, a little too much milk there – but eventually a delightful cake was made. By writing down the recipe, that success can be repeated.
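To make the idea concrete, here is a minimal sketch in Python of a recipe written as an algorithm. The steps and quantities are hypothetical, for illustration only:

```python
def bake_cake() -> list[str]:
    """A recipe as an algorithm: explicit steps that yield the same result
    every time. Ingredients and timings here are hypothetical."""
    ingredients = {"flour": "250 g", "sugar": "200 g", "eggs": "3", "milk": "120 ml"}
    steps = []
    # Step 1: combine the ingredients into a batter.
    steps.append("mix " + ", ".join(f"{amt} {name}" for name, amt in ingredients.items()))
    # Step 2: apply heat for a fixed time at a fixed temperature.
    steps.append("bake at 180 C for 35 minutes")
    # Step 3: because every step is explicit, the success is repeatable.
    steps.append("cool and serve")
    return steps

print("\n".join(bake_cake()))
```

Run it twice and it produces the same cake, which is precisely the point: an algorithm turns a one-off success into a repeatable one.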
The power of algorithms is simple: repeatable results display the success of a recipe in vivid detail. Like refining a cake recipe, computers were built through trial and error, in algorithmic fashion (albeit far more complex). Alan Turing championed this work, writing intricate recipes for how to build a computer, how to turn that computer into an engine, and how that engine could play chess (1a). Unfortunately for Turing, this came a few decades too early. The technology of his age couldn’t keep up with his algorithms, and all his work on chess engines stayed tethered to paper and pencil, yet it is cemented in history. Though Turing died in 1954 without seeing his invention succeed, his man-made competitor was incubating.
From the 1950s to the 1980s, computers continued to evolve. They spread into businesses, universities, libraries, and anywhere else a calculator was needed. Capable of solving complex calculations with ease and precision, computers had proven the value of repeatable results. Even so, the black screen staring back at man had yet to claim dominion over chess. Mankind still had a chance.
Despite the rapidly developing field of computer science, most grandmasters didn’t believe a computer could ever outplay them. Garry Kasparov, who became world chess champion in 1985, was convinced no machine could surpass him. Decades later, at a 2012 celebration of Turing’s centenary, Kasparov sat down against a reconstruction of “Turochamp”, Turing’s paper-and-pencil chess program, finally running on real hardware. At long last, Turing’s mechanical competitor could play for itself against one of the world’s strongest.
Kasparov beat Turochamp in just 16 moves (2a). Nevertheless, he applauded Turing for his invention.
As computers evolved, so did chess engines. Following the 80s, processing power kept doubling roughly every two years, and the engines grew with it. The 1990s brought the fall of the greatest chess master to the machine. In 1996, IBM’s Deep Blue became the first machine to win a game against a reigning world champion, and in a 1997 “man vs. machine” rematch it defeated Kasparov over a full match (3a). It marked a critical turning point for humanity and computers alike. The machines still had a long way to go, having only narrowly beaten Kasparov, but engines continued to grow at an exponential rate.
The golden age of computer chess engines was close. The ultimate chess engine, capable of beating any contender, man or machine, was on the horizon. Programmers worked tirelessly, pushing out engines and programs at unprecedented speed. One such engine was Stockfish, a young yet courageous warrior.
Stockfish 1.0 was released in 2008 as an open-source chess engine. Built as an improvement on another engine, Glaurung 2.1, it quickly grew in strength and fame within the chess community (4a). As computers improved, so did Stockfish, and iteration after iteration was released for public use. In 2013, Stockfish 3 arrived alongside Fishtest (5a), a key companion in the youthful engine’s success. Stockfish’s strength soon grew explosively, propelling it to the top of the chess rating system.
Fishtest is what is known as a “distributed testing framework”. Rather than testing improvements on its developers’ machines alone, volunteers donate their own computers’ processing power to verify whether each proposed change actually makes Stockfish stronger. This distributed approach allowed far more rapid progress. Thanks to Fishtest, Stockfish has played and learned from over a billion chess games.
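Here is a minimal sketch of that idea in Python. It is not Fishtest’s actual protocol (Fishtest runs sequential statistical tests over pairs of real engine games); the win/loss probabilities below are hypothetical stand-ins for a candidate patch playing against the current master version:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def play_batch(games: int) -> tuple[int, int, int]:
    """Stand-in for one volunteer machine playing a batch of games between
    a candidate patch and current master. Probabilities are hypothetical."""
    wins = losses = draws = 0
    for _ in range(games):
        r = random.random()
        if r < 0.30:        # candidate wins
            wins += 1
        elif r < 0.58:      # candidate loses
            losses += 1
        else:               # draw
            draws += 1
    return wins, losses, draws

# Eight "volunteers" each contribute a batch of 1,000 games in parallel.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(play_batch, [1000] * 8))

wins = sum(w for w, _, _ in results)
losses = sum(l for _, l, _ in results)
draws = sum(d for _, _, d in results)
score = (wins + 0.5 * draws) / (wins + losses + draws)
print(f"pooled score for the candidate patch: {score:.3f}")
# A patch is kept only if the pooled evidence shows it is stronger.
```

The design choice is the point: no single machine needs to play a million games when a thousand volunteers can each play a thousand.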
The momentum continued: Stockfish 5, 6, 7, and onward followed in quick succession. The engine soon proved its dominion in the Top Chess Engine Championship (TCEC), a machine-only tournament for the world’s strongest engine, claiming first place. That sustained growth has led Stockfish to the strength it carries today. It has long surpassed our strongest players, demonstrating a strength, skill, and precision that we have only the scantest hope of emulating. We lie in the wake of our own creation, and in awe of its predictions.
The most important change to Stockfish’s complex code arrived with Stockfish 12. This iteration introduced an “efficiently updatable neural network” (NNUE) for evaluation, and it wins ten times more game pairs than it loses against Stockfish 11 (6a). Neural networks are modeled after something familiar: the human brain. They allow computer programs to make decisions the way we do, letting the machine “identify phenomena, weigh options, and arrive at conclusions” (7a). That is, a neural network lets a chess engine assess and evaluate a position directly, rather than grinding through continuations until it stumbles on the right answer.
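To give a flavor of the idea, here is a toy sketch in Python with NumPy. It is not Stockfish’s actual NNUE architecture, and the weights below are random rather than trained; it only shows the shape of the computation, from piece-on-square features to a single evaluation score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Encode a position as 768 binary features: 12 piece types x 64 squares.
features = np.zeros(768)
features[0 * 64 + 12] = 1.0   # hypothetical: a white pawn on square 12
features[11 * 64 + 60] = 1.0  # hypothetical: the black king on square 60

# A real network learns these weights from millions of positions.
W1 = rng.normal(0.0, 0.1, (256, 768))  # input features -> 256 hidden units
W2 = rng.normal(0.0, 0.1, (1, 256))    # hidden units -> one output score

hidden = np.clip(W1 @ features, 0.0, 1.0)  # clipped-ReLU activation
score = float((W2 @ hidden)[0])            # positive favors White, by convention
print(f"evaluation: {score:+.3f}")
```

A trained network of this kind replaces hand-written evaluation rules with weights learned from the games the engine has played.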
What this powerful tool has to teach us lies in its accurate play. With the ability to consistently find the best move, or the move giving the best practical chances, chess engines reveal the strongest continuation in any given game. That accuracy is far beyond human capability, and it has rapidly expanded modern chess theory and opening preparation. Like our brains, the neural network built into Stockfish 12 has further deepened the already expansive knowledge Stockfish provides us.
Chess is not easy. It requires strategy, intelligence, skill, and accuracy. Indeed, the number of possible chess games, a figure known as the Shannon number, dwarfs even the estimated number of atoms in the observable universe (8a). The 64-square, checkered board and its 32 pieces hold greater possibility than the space beyond and within Earth. To grasp what constitutes a “good move”, let alone the “best move”, is nearly impossible; the scale is far beyond our comprehension. We are mere mortals staring at a near-infinite expanse.
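Shannon’s own back-of-the-envelope estimate shows where the figure comes from: assume roughly 30 legal moves for each side and a typical game of about 40 move pairs, then compare the result with the roughly $10^{80}$ atoms in the observable universe:

$$
N_{\text{games}} \approx (30 \times 30)^{40} \approx 10^{120},
\qquad
N_{\text{atoms}} \approx 10^{80}.
$$

The gap is forty orders of magnitude: roughly $10^{40}$ possible games for every atom.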
Man’s strongest competitors, our grandmasters, have dedicated decades of their lives to studying the game. The best of us can often find the strongest move in a given position, yet sustaining that accuracy over a game of 30+ moves is where we get lost. In competing with chess engines, the human limit has been reached. The difference between man and machine is best seen by comparing Elo ratings.
To measure the strength of a player, the U.S. Chess Federation and the International Chess Federation (FIDE) use the Elo rating system, named after its creator, Arpad Elo. An Elo rating summarizes a player’s performance in previous games; the player with the highest Elo is ranked the strongest (9a).
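The system’s core formula is simple. Given ratings $R_A$ and $R_B$, player A’s expected score against player B is

$$
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}.
$$

An expected score of, say, 0.64 means collecting about 64% of the available points, counting a draw as half a point.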
Currently, the world’s highest Elo rating belongs to Magnus Carlsen, former world champion. Carlsen’s rating? At its peak, 2882 (10a), surpassing Garry Kasparov’s 1999 peak of 2852. The world’s number two, Hikaru Nakamura, held an impressive 2816. An outstandingly close race, and yet our grandmasters have never reached 2900. Stockfish, by contrast, has long surpassed these scores and currently holds the world’s highest machine Elo rating.
As a rule of thumb, a player rated 100 points higher than their opponent is expected to score roughly 5 points in every 8 games (64%) (9a). The 30-point gap between Kasparov’s and Carlsen’s peaks represents significant strength, and the roughly 70-point gap between Nakamura and Carlsen shows why Carlsen prevailed so long as the world’s best. It is difficult to imagine a human ever leaving those numbers far behind.
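A few lines of Python make the rule of thumb concrete, using the ratings quoted above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Elo expected score for a player rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

print(f"{expected_score(2882, 2782):.2f}")  # 100-point edge        -> 0.64 (5 in 8)
print(f"{expected_score(2882, 2852):.2f}")  # Carlsen vs. Kasparov  -> 0.54
print(f"{expected_score(2882, 2816):.2f}")  # Carlsen vs. Nakamura  -> 0.59
print(f"{expected_score(3641, 2882):.2f}")  # Stockfish vs. Carlsen -> 0.99
```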
For computers, however, the gap is of another order entirely. As of July 2024, Stockfish was rated a formidable 3641 – overshadowing man’s strongest champion’s peak by 759 points (11a). It makes sense, then, why grandmasters are terrified of the future of AI.
Humans can no longer claim to be adversaries to computers. We have long lost that war. Rather, we should view chess engines as a friend, a calculator to help us when we’ve stepped off the path. Stockfish, in all its strength, is used by popular chess websites like chess.com and lichess.org to analyze individual games. This analysis provides important insight into any one game. Analysis shows us our own inaccuracies. Analysis is used by grandmasters, and by most contemporary chess players, to improve their game. Analysis is provided by Stockfish.
While we may be unable to compete with AI, we can learn from it. With our limited cognition, we blunder from day to day. The belief that we must prove ourselves right, rather than recognize and accept the truth, epitomizes this error in human thinking. Our intuition and unpredictability are what have modernized the game of chess, not our accurate play. As we stand opposite a tool of our own creation, we must accept an evident truth: human thought will always hold the potential for error. The tools of artificial intelligence can carry humanity into a new dawn of precision. So long as we accept our blunders, our machines can guide us with a steady hand.

Resources:
1a https://chessentials.com/history-of-chess-computer-engines/
2a https://www.chessgames.com/perl/chessgame?gid=1670503
3a https://conversationswithtyler.com/episodes/garry-kasparov/
4a https://www.talkchess.com/forum/viewtopic.php?t=24675
6a https://stockfishchess.org/blog/2020/stockfish-12/
7a https://www.ibm.com/topics/neural-networks
10a https://www.techopedia.com/magnus-carlsen-how-intuition-and-ai-shape-the-best-chess-player-in-the-world
11a https://computerchess.org.uk/ccrl/4040/index.html