What are the practical uses of chess?

The success of "stupidity"

Summary

The competition between the chess computer Deep Blue and the then world chess champion Garry Kasparov in 1997 was a spectacle staged for media effect. At the same time, the game of chess, like other games, was a test field for artificial intelligence research. Deep Blue's victory was described as a "milestone" of AI research on the one hand and as a "dead end" on the other, since the superiority of the chess computer rested on pure computation and had nothing to do with "real AI".

The article asks about the premises of these different interpretations and places Deep Blue and its way of playing chess in the history of AI. This also requires an analysis of the underlying concepts of thinking. Finally, the article advocates assuming different ways of thinking for humans and computers and, instead of fundamental debates about the concept of thinking, asking about the consequences of the human-machine division of labor.

Abstract

The competition between the chess computer Deep Blue and the then world chess champion Garry Kasparov in 1997 was a spectacle staged for the media. However, the game of chess, like other games, was also a test field for artificial intelligence research. On the one hand, Deep Blue's victory was called a "milestone" for AI research; on the other hand, a dead end, since the superiority of the chess computer was based on pure computing power and had nothing to do with "real" AI.

The article questions the premises of these different interpretations and maps Deep Blue and its way of playing chess into the history of AI. This also requires an analysis of the underlying concepts of thinking. Finally, the essay calls for assuming different “ways of thinking” for man and computer. Instead of fundamental discussions of concepts of thinking, we should ask about the consequences of the human-machine division of labor.

The chess-playing IBM computer Deep Blue is undoubtedly one of the most famous machines of the 20th century. Its victory in 1997 against the then reigning and long-time world chess champion Garry Kasparov was a widely noticed and debated event and at the same time a skilfully staged advertising coup for IBM. The competition was taken up in popular media and processed, for example, in a documentary staged as a thriller under the title Game Over. Chess-playing computers have also been main characters in other films, including HAL in Stanley Kubrick's 2001: A Space Odyssey.Footnote 1 Deep Blue's victory also sparked tremendous media coverage. This is not surprising: after all, playing chess was considered a genuinely human achievement and evidence of intelligence. The classic narrative of "man versus machine" could be staged here particularly easily and spectacularly. Even before the games against Garry Kasparov, there had been a series of public "man versus computer" competitions. In the 1970s there was an abundance of games between humans and computers (cf. Levy 1976: 113–129); championships between humans and computers were held in Switzerland.Footnote 2 In 1979 a competition between the then leading chess program Chess 4.8 and the international master David Levy was broadcast on the Second German Television (ZDF).Footnote 3 Levy had also made a bet in 1968 that he would not lose to a computer within the next ten years - a bet he won. And in 1979 he again did not allow himself to be defeated by the chess program (Coy 1993: 212 f.).Footnote 4 Also in 1979 there was a simultaneous game between a chess computer and nine chess players in Hamburg, organized by the magazine Der Spiegel.Footnote 5

The cultural scientist Hartmut Böhme described the competition between Kasparov and Deep Blue as silliness (Böhme 2009: 68). "Competitions" of this kind, which have existed and continue to exist in many forms beyond chess, are not simply silly, however. They form a genre of their own that fulfils several functions within artificial intelligence (AI) research. First, they have a demonstrative function as high-profile advertising campaigns intended to secure attention, demonstrate success and, ultimately, keep funding flowing. Second, the games are test fields on which paradigms of AI research are tried out. Their importance is therefore not reduced to the spectacular, event-like and advertising-effective; they are also an important part of research. Third, they make the results of AI research visible to a broader public and lead to intense debates.

The public discussion about the spectacle of Deep Blue revolved primarily around the question of the superiority of machines over humans. The chess computer aroused the classic fears of the dominance of the machine, of human powerlessness and of the dethroning of the human being as the only thinking being.

Jay David Bolter spoke generally of the computer as a "defining technology" (Bolter 1984). He used the term to refer to technologies that change the relationship between humans and nature, as he wrote in 1984, as well as humans' self-definitions (Bolter 1984: 10). Metaphorically speaking, he asked how humans got into the computer in order to describe, albeit in a tendentially culture-critical manner, "a change in the way men and women in the electronic age think about themselves and the world around them" (Bolter 1984: 4). Artificial intelligence research in particular, with its endeavor to imitate human intelligence, touches on fundamental questions of what it means to be human.Footnote 6

However, this aspect will not be the focus in the following. Rather, the competition between Deep Blue and Kasparov is placed here in the history of AI research. The article pursues two central concerns.

First, the most common standard narrative of AI history is to be questioned and differentiated. It usually describes a shift from a cognitivist paradigm in the early phase of AI research to connectionism / neural networks and "behavior-based" or "embodied intelligence" approaches since the 1980s.Footnote 7 Within this reading of AI history, Deep Blue is interpreted as a dead end (e.g. Ensmenger 2012: 23), as a gimmick that did not advance AI research. However, Deep Blue's victory, like Watson's victory in an American game show in 2011, was also described as a great success of AI research and a milestone in computer history (Schaeffer & van den Herik 2002: 3; Newborn 2003). This is presented in more detail below, and the various arguments are analyzed with regard to their premises in order to gain a fresh look at AI history. The article also argues that the separation usually drawn between theoretical interest and basic research on the one hand and application on the other, i.e. between strong and weak AI, is often too strict.Footnote 8 According to the thesis developed below, the game of chess in particular was a field of AI research that was meant to yield both fundamental insights into the human brain and practical applications. This also reveals the pragmatism of many AI researchers who, as Noam Chomsky put it generally with regard to research, "tend to study what you know how to study" (Chomsky 2012).

Second, Deep Blue ignited social debates about whether machines can think and what thinking is, as happened again with the victory of the computer Watson in an American game show in 2011. Standard arguments that had been rehearsed since the beginning of AI research were exchanged in sharp controversies. The second central concern of the article is to examine this debate, which leads to the core of AI history and reveals the historicity and polyvalence of the terms involved. On the one hand, the concept of thinking within early AI stood in a long tradition of logical, calculating thinking, while critics regarded this as a reductive shortening. In addition, the debate repeatedly revolved around whether thinking is reserved for humans or whether what the computer does can also be described as thinking. Here an anthropocentric defense of human thinking confronted a pragmatic engineering and programming logic for which the result counts: for example, that a human chess world champion is defeated.

The paper traces this debate using the examples of Deep Blue, Watson, and AlphaGo. In this debate, a strict, categorical separation of human and machine intelligence can be observed, amounting to a fundamental discussion about concepts which, according to the thesis advanced here, misses the far more decisive questions about the consequences of machine intelligence, as the last section will explain.

The following first outlines the spectacular story of Deep Blue as well as the history of computer chess and its role in early AI research. Here the concept of thinking that dominated early AI research is historicized and its compatibility with the logic of the computer is shown. The method with which Deep Blue played chess, the so-called brute force strategy, is then traced in order to place Deep Blue in the history of AI research. This is followed by a look at the heated debate that followed Deep Blue's victory over the world chess champion: did it have anything to do with thinking, and was it "real AI"? Finally, the question of success or dead end is posed once more, not least with a view to the IBM computer Watson and DeepMind's AlphaGo, arguing for new perspectives on the history of AI research.

No historical study of Deep Blue is yet available. However, the victorious chess computer has frequently been the subject of various publications. Some of them are popular science books, often written by chess players. Participating actors, such as the chairman of the Computer Chess Committee of the Association for Computing Machinery, Monty Newborn, or one of the creators of Deep Blue, Feng-Hsiung Hsu, have also told the story from their perspective. The same applies to the history of computer chess, which has hardly been studied. The computer historian Nathan Ensmenger made an important contribution with an essay to which we will return later (Ensmenger 2012). In the 1990s, Wolfgang Coy also wrote an essay on computer chess (Coy 1993). Beyond that, there are accounts by computer scientists, general treatises on the history of the game of chess that also take computer chess into account, and popular science publications on computer chess, produced above all in the 1970s and 1980s.Footnote 9 Like the contemporary debate about Deep Blue and the question of machine thinking, these publications are used as sources here.Footnote 10

The story: Deep Blue's victory over the world chess champion

The IBM computer Deep Blue was the first chess computer to beat a world chess champion, in a single game in 1996 and finally in a match in 1997. In 1996 Kasparov had won the match against Deep Blue in Philadelphia, even though he had lost one game of it. The 1997 match was the rematch, for which Deep Blue had been considerably improved compared to the previous encounter (Campbell et al. 2002: 57). As one of the developers put it, it was no longer the same machine. On May 11, 1997, this revised Deep Blue finally won a six-game match against the reigning world chess champion. Two games were won by Deep Blue, one by Kasparov, and three were drawn. The computer won the last of the six games after only 19 moves (Goodman & Keene 1997: 100–106).Footnote 11

The match lasted a total of eight days and sparked massive media coverage. It took place in front of cameras in a television studio in New York and was also broadcast on the Internet; in 2001 it was even dubbed the "greatest Internet event of all time". The organizers recorded more than 22 million hits per hour on the tournament homepage; accordingly, the site collapsed several times.Footnote 12 Numerous articles appeared in the media interpreting the event's significance for mankind, for AI research and for the game of chess; above all, they made the battle between man and machine their topic, stirring people's age-old fears of the superiority of the machine.Footnote 13

The victory over Kasparov was particularly spectacular because, firstly, AI research had been working on chess computers for decades and chess was one of the central paradigms of AI research. As Monty Newborn summed it up somewhat grandiosely, Deep Blue was the result of the work of thousands of researchers who had begun working on chess computers in the late 1950s (Newborn 2000: 27). In 1957 Herbert Simon predicted that within ten years a computer would be world chess champion (Bruns 2000: 313). Beating a world chess champion was a prestigious goal of AI research that was finally achieved in 1997.

Second, Deep Blue's victory was spectacular because Garry Kasparov was seen as a particular challenge. He was an exceptional player who had been described as invincible. From 1986 to 2005 he held first place in the world ranking of professional chess players (Ensmenger 2012: 22), far longer than any other professional to this day.

Kasparov's defeat was followed by a series of further games between grandmasters and chess computers. Kasparov immediately demanded a rematch, but IBM refused (Hsu 2002: 261). Deep Blue was mothballed and later transferred to the Smithsonian Institution in Washington. In 2002, 2003 and 2006, however, there were further games against chess computers by Kasparov and also by Vladimir Kramnik, who had succeeded Kasparov as world chess champion in 2000. Most ended in a draw; only rarely did the human player win. In 2006 Kramnik lost decisively to Deep Fritz. The 2006 match is considered the end of the competitions between chess computers and humans; at the same time it cemented the realization that in chess the computer is now definitively beyond human reach.

A brief sketch of computer chess and the tradition of logical-formal thinking

A tabular overview of important events in the history of computer chess can be found in Franchi & Güzeldere (2005: 128 f.). Levy & Newborn (1991) provide an overview of the history of computer chess; cf. also Coy (1993).

The idea of delegating the game of chess to machines can be found as early as the 18th century. Most memorable is Wolfgang von Kempelen's "Chess Turk", presented in 1769, which, however, turned out to be not a thinking machine but a deception: a chess master was hidden inside the machine. Nevertheless, Kempelen's machine toured Europe as a much-noticed spectacle until the late 19th century. Edgar Allan Poe commented on the machine in a well-known essay (cf. Coy 1993). Charles Babbage had considered whether the Analytical Engine, the calculating machine he had designed, could play chess (Bell 1978: 12 f.). At the end of the 19th century, the Spanish engineer Leonardo Torres y Quevedo developed a machine that at least managed a modest endgame (Pfleger & Weiner 1986: 15 f.; Shannon 1950).

However, computer chess only emerged with AI research in the middle of the 20th century. The history of computer chess is of great relevance to the history of AI research (Franchi & Güzeldere 2005: 46–58; Ensmenger 2012). Here the idea of delegating chess playing, calculating or thinking to machines was combined with the logic of the computer. In order to situate computer chess and Deep Blue historically, it is necessary to look back at the history of the concept of thinking.

As Barbara Becker noted, "most modern theories and designs about the human mind were theoretical treatises" until "with the invention and spread of computer technology in the 1950s a new research perspective suddenly opened up" (Becker 1992: 94). At this point chess became a central paradigm of the emerging field of AI research. Chess was considered the "Drosophila" of the research field (Franchi & Güzeldere 2005: 47; Ensmenger 2012: 5 f.),Footnote 14 that is, in analogy to genetics, a relatively simple model on which far-reaching theories, here about the human mind, the brain and intelligence, could be developed.

Research into games in general, not just chess, is considered the "longest running experiment in computing-science history". Games, especially chess, served as "experimental test beds for many areas of artificial intelligence" (Schaeffer & van den Herik 2002: 6). As Claude Shannon wrote in his essay "Programming a Computer for Playing Chess": "Although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance" (Shannon 1950: 256).

The important role of chess is explained, first, by the image of the game as an indicator of intelligence. It was something only humans did (not animals), something that required thinking. If a computer could play chess and even beat human players, it was intelligent - or so many AI researchers assumed. Shannon wrote: "[C]hess is generally considered to require 'thinking' for skilful play" (Shannon 1950: 257). Newell, Shaw and Simon put it thus: "Chess is the intellectual game par excellence. [...] If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor" (Newell et al. 1958: 320).

Second, a central goal of early AI research was to better understand how the brain worked by way of the analogy to the computer. Again, this can only be understood against the background of the prevailing concept of what thinking is. For the basic assumption of early AI research - that is, of the paradigm described as cognitivist or rationalist - was that thinking is information processing, rests on following reproducible rules and can thus be formalized (Dittmann 2015: 241; Zimmerli & Wolf 1994: 14). This definition of thinking followed a long tradition of interpreting thinking formally and logically. Opinions on how far back the history of AI reaches are therefore divided, as Zimmerli and Wolf noted: "Not only do almost all accounts begin with ancient forerunners of the AI discussion, but the modern era begins now with Descartes, now with Leibniz, now with Babbage, now with Turing, now with McCarthy" (Zimmerli & Wolf 1994: 7). Descartes, in the mathesis universalis, had interpreted formal reasoning as a kind of arithmetic; Hobbes stated that thinking is nothing more than calculation (ibid.: 10; see also Haugeland 1987: 19–38). With Leibniz in particular, "calculability becomes a criterion of reason" (Zimmerli & Wolf 1994: 10; cf. also Krämer 1988).

It is no coincidence that Babbage, while developing his calculating machine, was already considering whether it could play chess. The game of chess, with its clear rules and logical structure, corresponded to the concept of thinking as it had been formulated in the spirit of formalization and calculation since the early modern period. It is therefore no coincidence that chess became the "Drosophila" of AI research. Rather, it matched historically powerful concepts:Footnote 15 if thinking works in a formal-logical way, then, so the assumption, it can be delegated to machines. With regard to chess, Shannon formulated: "A solution of this problem will force us either to admit the possibilities of a mechanized thinking or to further restrict our concept of thinking" (Shannon 1950: 257).

Third, chess was also suitable for early AI research for pragmatic reasons, as Shannon explained in his 1950 essay. Chess is suitable because "(1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution". And finally: "[...] the discrete structure of chess fits well into the digital nature of modern computers" (Shannon 1950: 257).

The debate about thinking discussed below can only be understood against the background of this tradition of defining thinking as formal-logical. The critics of AI research in particular, such as the philosopher Hubert Dreyfus, took aim at precisely such an understanding of thinking. This concept of thinking, however, corresponded to the game of chess as well as to the logic of the computer. To a certain extent, a historical tradition of understanding what thinking is converged with the logic of the computer and with the rule-bound, formal structure of the game of chess. This convergence seemed promising both for a basic understanding of thinking thus conceived and for its transfer to machines.

The role of computer chess in AI research was correspondingly prominent: "Hundreds of academic papers have been written about computer chess, thousands of working chess programs have been developed, and millions of computer chess matches have been played" (Ensmenger 2012: 6). Alan Turing had already thought about a chess machine in 1946 and designed a chess program on paper in 1953 (Turing 1987). Konrad Zuse as well as Shannon, McCarthy, Newell, Simon and other early AI researchers also dealt with chess (cf. Levy & Newborn 1991: 24–38).

In the early days, however, few realized how difficult the problem was and how long it would take for a computer to beat a human world chess champion. Instead there were many optimistic predictions (Schaeffer & van den Herik 2002: 1), including Simon's from 1957 that a chess computer would become world champion within the next ten years. In the 1950s, John von Neumann had developed a program for a 6 × 6 chessboard. In 1958 Alex Bernstein and Michael de V. Roberts wrote a chess program for the IBM 704 computer, which was presented in the magazine Scientific American (Newell et al. 1958). The first computer chess world championship took place in Stockholm in 1974, the first microcomputer world championship in 1980 (Pfleger & Weiner 1986: 22).

The initial successes of the 1950s sparked euphoria and high expectations. But while amateur players were quickly defeated, computers lost to chess masters for a long time. It was not until 1988 that a computer, Deep Thought, defeated a grandmaster in a game (Campbell et al. 2002: 58). In 1997, the IBM computer Deep Blue finally succeeded in defeating Kasparov.

At IBM, a team had been working on a chess computer since 1989. The research that eventually led to Deep Blue, however, had already started in the mid-1980s at Carnegie Mellon University in Pittsburgh. Inspired by the work of Ken Thompson, a group of PhD students, including Murray Campbell and Feng-Hsiung Hsu, began to develop chips for computer chess (Goodman & Keene 1997: 11; cf. in detail Hsu 2002). Hsu worked on the hardware, Campbell on the software. One result of their research was the aforementioned Deep Thought, which defeated the Danish grandmaster Bent Larsen in 1988. Deep Thought had also played against Kasparov in 1989, but lost decisively. In the late 1980s, Campbell and Hsu were hired by IBM to work on computer chess. Chess masters were also involved in the development of Deep Blue, in particular the American grandmaster Joel Benjamin, with whom Deep Blue went to "chess school". In total, the development of Deep Blue took twelve years. Hsu's book describing the development makes especially clear what an immense role hardware and speed played. Not least, his account also makes clear how much effort and time, including setbacks, the development of Deep Blue cost.

The A and B strategy: how do computers play chess?

In 1950, Claude Shannon wrote a seminal, widely acclaimed and much-cited article on the possibilities and approaches of chess programming (Shannon 1950). He distinguished two approaches, the A strategy and the B strategy. The A strategy relies on calculating all possible moves up to a certain depth: the computer searches through the decision tree. This strategy requires immense computing power, since completely senseless moves are calculated as well. It is the so-called brute force method. In 1950, however, Shannon still noted:

Unfortunately a machine operating according to the type A strategy would be both slow and a weak player. It would be slow since even if each position was evaluated in one microsecond (very optimistic) there are about 10⁹ evaluations to be made after three moves (for each side). Thus, more than 16 minutes would be required for a move, or 10 hours for its half of a 40-move game. (Shannon 1950: 269)

This quote makes clear the necessity and importance of high computing power, just as it seemed unforeseeable then that such power would actually be available almost 50 years later.
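To make the Type A idea concrete, here is a minimal sketch of an exhaustive fixed-depth minimax search, Shannon's brute force scheme, in Python. It assumes the third-party python-chess package for move generation, and the material-count evaluation is a deliberately crude illustrative stand-in for a real evaluation function, not anyone's actual program. The arithmetic in Shannon's quote follows from the same structure: with roughly 30 legal moves per position, three moves per side give about 30⁶ ≈ 10⁹ leaf positions, and at one microsecond each that is about 1000 seconds, i.e. more than 16 minutes per move.

    # Minimal sketch of Shannon's Type A ("brute force") strategy:
    # exhaustive minimax search of every legal move down to a fixed depth.
    # Assumes the third-party python-chess package; the material count is
    # a crude illustrative evaluation, not Deep Blue's.
    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board: chess.Board) -> int:
        """Material balance from White's point of view."""
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == chess.WHITE else -value
        return score

    def minimax(board: chess.Board, depth: int) -> int:
        """Type A: no filtering, every continuation is examined."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        scores = []
        for move in board.legal_moves:   # all moves, even "senseless" ones
            board.push(move)
            scores.append(minimax(board, depth - 1))
            board.pop()
        # The side to move picks its best option: White maximizes, Black minimizes.
        return max(scores) if board.turn == chess.WHITE else min(scores)

    # Each extra ply multiplies the work by ~30; depth 6 already means
    # roughly 30**6, i.e. on the order of 10**9, leaf evaluations.
    print(minimax(chess.Board(), depth=2))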

The B strategy relies on first recognizing senseless moves and, instead of calculating all moves, starting from evaluations of the position and privileging certain options: "Select the variations to be explored by some process so that the machine does not waste its time in totally pointless variations" (Shannon 1950: 270). This strategy, also known as "selective", was more similar to human thinking (Newell et al. 1958).Footnote 16 However, since it was far more difficult to implement, efforts soon concentrated on the A strategy (Pfleger & Weiner 1986: 17). This is important insofar as, as Nathan Ensmenger pointed out in his 2012 essay, the decision for the A strategy shaped the course of AI research and became a dominant approach. Many consider this the "original sin"Footnote 17 of AI research, since it led into a dead end.
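For contrast, a minimal sketch of the B strategy under the same assumptions as the Type A example above (python-chess, the crude evaluate function): a cheap plausibility heuristic ranks the moves, and only a handful of candidates per position are explored. The preference for captures and checks below is an illustrative stand-in for real chess knowledge; the difficulty Shannon pointed to lies entirely in that heuristic, which is why the approach proved so hard to implement.

    # Minimal sketch of Shannon's Type B ("selective") strategy: rank moves
    # with a cheap plausibility heuristic and search only the best few.
    # Reuses `evaluate` from the Type A sketch above.
    def plausibility(board: chess.Board, move: chess.Move) -> int:
        """Illustrative stand-in for chess knowledge: prefer forcing moves."""
        score = 0
        if board.is_capture(move):
            score += 2
        if board.gives_check(move):
            score += 1
        return score

    def selective_search(board: chess.Board, depth: int, breadth: int = 5) -> int:
        """Type B: explore only the `breadth` most plausible moves per node."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        candidates = sorted(board.legal_moves,
                            key=lambda m: plausibility(board, m),
                            reverse=True)[:breadth]   # prune "pointless variations"
        scores = []
        for move in candidates:
            board.push(move)
            scores.append(selective_search(board, depth - 1, breadth))
            board.pop()
        return max(scores) if board.turn == chess.WHITE else min(scores)

Whereas the Type A tree grows roughly like 30 to the power of the depth, the selective tree grows like breadth to the power of the depth, so at a breadth of 5 a six-ply search costs thousands rather than hundreds of millions of evaluations; everything depends, however, on the pruning heuristic never discarding the decisive move.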

The method was developed further, particularly as computing capacity increased in the 1970s and 1980s. It became clear that there was a strong correlation between search speed, search depth and the success of a program: the higher the computing power of the computer, the higher its chances of winning. Work therefore focused on the development of fast search routines (Schaeffer & van den Herik 2002: 2). Research on computer games thus concentrated clearly on the brute force method, although there were always doubts about this direction of research.Footnote 18 After slow progress and many attempts and tournaments in the 1980s, successes in games against grandmasters were finally achieved in the 1990s (see above).
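The correlation between speed and depth can be illustrated with simple arithmetic (an illustrative back-of-the-envelope model assuming a uniform branching factor of about 30 and no pruning, a simplification of real programs): since the number of positions grows exponentially with depth, the reachable search depth is essentially a function of the machine's speed.

    # Illustrative calculation: how deep can a naive brute force program
    # search in ~3 minutes per move at a given speed? Assumes a uniform
    # branching factor of ~30 and no pruning - a simplification.
    BRANCHING = 30
    TIME_PER_MOVE_S = 180

    def reachable_depth(positions_per_second: float) -> int:
        budget = positions_per_second * TIME_PER_MOVE_S
        depth = 0
        while BRANCHING ** (depth + 1) <= budget:
            depth += 1
        return depth

    for nps in (1e3, 1e6, 200e6):   # 200e6 ~ Deep Blue's stated speed
        print(f"{nps:>12.0f} positions/s -> depth {reachable_depth(nps)} plies")

Under these assumptions a thousandfold speedup buys only two or three extra plies, which is precisely why ever faster special-purpose hardware (and, in real programs, pruning techniques) mattered so much.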

Deep Blue was based on a search algorithm, a move evaluation function and a huge database of played games. In 1997 the computer was able to "check" 200 million positions per second and then decide which move to make. The evaluation function, developed together with the chess masters Joel Benjamin and Miguel Illescas, and the databases of openings and endgames also contributed to Deep Blue's success (Newborn 2000: 27; Campbell et al. 2002: 79).
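Schematically, the three components named here fit together as in the following sketch, which reuses the minimax routine from above: consult the opening database first, otherwise fall back to deep search guided by the evaluation function. The one-entry "book" is a hypothetical placeholder, and Deep Blue's real databases and evaluation function were vastly larger and ran on special-purpose hardware, so this illustrates only the architecture, not Deep Blue's code.

    # Schematic sketch of the three-component architecture: opening
    # database, search algorithm, evaluation function. The one-entry
    # "book" is hypothetical.
    OPENING_BOOK = {chess.Board().fen(): "e2e4"}   # placeholder entry

    def choose_move(board: chess.Board, depth: int) -> chess.Move:
        book_move = OPENING_BOOK.get(board.fen())
        if book_move is not None:                  # 1. database lookup
            return chess.Move.from_uci(book_move)
        scored = []                                # 2. brute force search ...
        for move in board.legal_moves:
            board.push(move)
            scored.append((minimax(board, depth - 1), move))  # 3. ... guided by evaluate()
            board.pop()
        if board.turn == chess.WHITE:
            return max(scored, key=lambda t: t[0])[1]
        return min(scored, key=lambda t: t[0])[1]

    print(choose_move(chess.Board(), depth=3))     # book hit: e2e4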

Deep Blue's controversial importance for the history of AI research

Within the historiography of AI research, as among its critics, it has often been emphasized quite unanimously that the decision for, and dominance of, the so-called A strategy, i.e. the brute force method, diverted AI from its actual goal of understanding human thinking. Nathan Ensmenger explicitly attributed this to the importance of the game of chess in AI research. The concentration on chess in particular led into a theoretical dead end, insofar as all the extensive research produced no new theoretical knowledge about the functioning of the human brain and thus missed the goal of AI research. Ensmenger's assessment corresponds to the narratives of the historiography, which emphasize that the cognitivist paradigm, which Haugeland calls "Good Old-Fashioned AI" (GOFAI) (Haugeland 1987: 96), led into a dead end.Footnote 19 This goes hand in hand with a by now dominant narrative that speaks of an "AI winter" in the 1980s (Nocks 2008: 82), when the lack of success in AI research led to cuts in government funding. This was accompanied by a paradigm shift away from the cognitivist paradigm towards connectionism, neural networks and behavior-based, "embodied" approaches (cf., for example, the overview in Nocks 2008; Lenzen 2002). Or, as John Haugeland put it, the first phase of AI research was considered over (Haugeland 1987: 96).

From this perspective, Deep Blue's victory reads as a dead end in a twofold sense. First, Deep Blue was a failure from the standpoint of so-called strong AI, which aims to gain insight into how the human brain works and to understand human thinking. The tremendous computing power and the methods with which Deep Blue achieved its goal are in no way equivalent to human intelligence; no knowledge about the functioning of the human brain could be gained from them. Second, Deep Blue was based on a formal-logical approach, on programming and structured databases. This approach, which dominated early AI research, is now considered obsolete; it was superseded by so-called connectionism, by neural networks. Deep Blue thus belongs to a past and now criticized paradigm of AI research.

The early writings of AI researchers in particular had, as noted above, emphasized the similarity between human and machine information processing and aimed at a deeper understanding of human thought processes. So it is not surprising that AI research is repeatedly measured by whether the computer model of the mind imitates human thinking and contributes to a basic understanding of it.

However, Alan Turing had already resolved the problem of machine thinking in a pragmatic way in his 1950 essay "Computing Machinery and Intelligence". Right at the beginning of his text, he famously evaded a definition of thinking, which seemed fruitless to him, and replaced the question with a game, the basis of the Turing Test. He also took up the objection that this test might capture a pure simulation, a form of "intelligence" that does not resemble human thinking. Turing writes: "Isn't it the case that machines do something that can be described as thinking, but which is very different from what humans do?" According to Turing, this objection is serious, "but we can at least say that, even if a machine can be built in such a way that it plays the imitation game satisfactorily, we need not be concerned about it" (Turing 2007 [1950]: 39). For Turing, then, one would speak pragmatically of machine thinking once the machine has passed the Turing Test, i.e. is no longer distinguishable from human intelligence, regardless of whether its way of thinking is the same.Footnote 20 With this, Turing had already abandoned the idea that computer intelligence must be similar to human intelligence; in the logic of the Turing Test, indistinguishability suffices. With regard to computer chess, there are analogous "tests" in which chess players have to judge from games whether a computer or a person is playing - which they mostly recognize. The difference between the "ways of thinking" is thus visible.

McCarthy, Minsky, Rochester and Shannon also distinguished between two approaches to intelligent machines in their application for funding the Dartmouth Conference in 1956:

Two approaches, however, appear to be reasonable. One of these is to find how the brain manages to do this sort of thing and copy it. The other is to take some class of real problems, which require originality in their solution and attempt to find a way to write a program to solve them on an automatic calculator. Either of these approaches would probably eventually succeed. (McCarthy et al. 1955: 7)

They further formulated: "For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving" (ibid.: 9). Here, too, the reference is to behavior that merely seems intelligent. Similarly, Marvin Minsky later defined artificial intelligence by speaking of AI when machines do things that, if done by humans, would require intelligence.

All these remarks refer less to the imitation of the human brain and its fundamental understanding than to the development of computer methods for carrying out tasks that, in humans, require intelligence. Shannon, too, had emphasized the differences between human and machine intelligence in his paper on computer chess. The Type A strategy in particular does not imitate the human way of playing chess, as his paper plainly discussed. Shannon nevertheless suggested, quite pragmatically, that a chess program "should [be matched] to the capacities and weakness of the computer. The computer is strong in speed and accuracy and weak in analytical capabilities and recognition. Hence, it should make more use of brutal calculations than humans" (Shannon 1950: 273).

In addition, Shannon, like McCarthy with regard to the Dartmouth conference (cf. McCarthy et al. 1955), had emphasized right at the beginning of his essay that research on chess was not just about "theoretical possibilities". He mentioned potential applications that were "worthy of serious considerations from the economic point of view" (Shannon 1950: 256), among them "[m]achines for performing symbolic (non-numerical) mathematical operations", "[m]achines for making strategic decisions in simplified military operations", "machines capable of orchestrating a melody" and "[m]achines capable of logical deduction" (ibid.).

Norbert Wiener took up Shannon's considerations and formulated clear concerns, especially with regard to the use of AI in military contexts:

Shannon […] shows, among other things, the possibility that such a device could be the first step in the construction of a machine with which one could evaluate military situations and make the best decision in each case. One should not think that he says this lightly. […] When Shannon speaks of the development of military tactics, he is not talking in the dark, but rather discussing a highly topical and threatening possibility. (Wiener 1958: 174)

Wiener described the possible development of such an AI as "clear and frightening" (ibid.: 176).

Two things are crucial in the arguments of these early AI publications. First, it is quite clear that imitating and understanding the human brain by means of computer models was one goal of AI. At the same time, the path of simulating the brain with methods that rely precisely on what distinguishes the computer from the human brain (speed, accuracy, flawlessness) was taken from the beginning - as Shannon had put it, this meant using strengths of the machine that humans do not have. It can thus be said that AI aimed at applications from the outset and took the "computer path", which set aside the goal of fundamental findings about the human brain.Footnote 21

From this perspective on AI research, however, the question of the success, failure or "dead end" of results such as Deep Blue poses itself differently. If the aim is merely to simulate human thinking and behavior with computers in order to produce applications, Deep Blue can be rated a considerable success (cf. also Franchi & Güzeldere 2005: 55 f.), even if connectionism and learning systems now play a far greater role. Deep Blue stands for the success of an application that works in a non-human way. The brute force method is of great importance for many "search-based applications" (Schaeffer & van den Herik 2002: 6), and its success is evident today. Even IBM, during the years of Deep Blue's development, did not primarily pursue the goal of impressing with a chess computer and staging a media spectacle. The focus was on concrete and now highly relevant applications such as data mining, financial analysis (market trends or risk analyses, for instance) and molecular modeling.

This also points to two completely different conceptions of science. On the one hand, a science oriented towards fundamental principles and explanations, which demanded a "real AI" - that is, fundamental knowledge about the principles of human thinking - as against the supposed, "merely" computational successes. This rested on a concept of thinking and intelligence tied to human thinking, understanding and consciousness. On the other hand, the debate about Deep Blue points to a pragmatic attitude that relies on the simulation of human thinking and on working applications, without the method of "thinking" having to resemble the human way of thinking and without every detail of the computer's "thought process" having to be understood.

The fact that the brute force method was repeatedly devalued in the discourse, as the next section will show above all, was related on the one hand to the disappointed expectation that a computer could think in the same way as a person and would, for example, arrive at a "correct" understanding of chess. On the other hand, it was also related to an anthropocentric strategy of defending human thinking as something special.

Thinking vs. stupid arithmetic: a fundamental debate

The debate about the question of thinking revealed these disappointed hopes for a fundamental understanding of human thought, which the critics felt all the more clearly against the background of spectacular, media-celebrated "successes" such as Deep Blue or Watson. After his first game against Deep Blue in 1996, Kasparov himself formulated the crucial question: was it justified to speak of thinking or not?

Kasparov retrospectively commented on the first game against Deep Blue in 1996 as follows: "For the first time, I literally felt, indeed smelled, a kind of intelligence on the other side. Although I gave everything, the machine, unmoved, played easy, wonderful, flawless chess. I was shocked. [...] Quantity seemed to turn into quality."Footnote 22 With regard to the second game of the 1997 match, Kasparov emphasized that it felt as if he had played against a person. While he formulated this as an accusation, as a suspicion that there had been manipulation,Footnote 23 representatives of AI research such as Hans Moravec or Ray Kurzweil enthusiastically took it up and interpreted it as proof that computers could play like humans (Moravec 1998).

Monty Newborn put it as follows: "At the most fundamental level, Deep Blue's achievement provoked considerable thought on the subject of what intelligence is all about" (Newborn 2000: 27). Deep Blue's victory raised questions about the relationship between quantity and quality, about whether sheer quantity can be converted into quality; several commentaries address precisely this question.Footnote 24 Deep Blue's success thus revived the debate, ongoing for several decades and a constant companion of AI, about whether machines can think.

Above all, philosophers criticized the cognitivist paradigm, the orientation towards chess and the related concepts of intelligence and thinking.Footnote 25 The criticism of Hubert Dreyfus and John Searle in particular was widely received, including within AI research. Both commented on Deep Blue, whose "success" they questioned and criticized anew with arguments against AI research that they had in part been developing since the 1960s. At the same time, they argued against a purely rational, formal-logical concept of thinking of the kind that had begun to take shape in the 17th century. Dreyfus had already formulated a very fundamental critique of AI research in his 1965 essay "Alchemy and Artificial Intelligence" (Dreyfus 1965), in which he surveyed the research fields and the state of AI. His central argument was that there are capacities and properties of the human brain that in principle cannot be reproduced in computers. He questioned the assumption of early AI research that the human brain processes information in discrete operations, stating that there was no evidence that information processing in the human brain proceeds in the same way as in the computer. In particular, he named three properties of the human brain that cannot be reproduced in a computer. With regard to the game of chess, he pointed to the problem of identifying the moves relevant to the course of the game among the multitude of possible moves - a problem that, as Shannon's 1950 article makes clear, was also discussed within AI research. The exponentially growing number of moves, Dreyfus argued in 1965, made it impossible for the computer to solve this problem by brute force; at the same time, the specifically human ability to select meaningful moves could not be represented digitally. In essence, Dreyfus concluded that the B strategy developed by Shannon could not be formalized and the A strategy could not be implemented for lack of computing power. He considered the capacity for heuristic selection an exclusively human quality, which, following William James, he called "fringe consciousness", a "marginal" or "vague awareness" (Dreyfus 1965: 21). As further fundamental limits of AI he named the ambiguity of language, i.e. the fact that meaning always results from the context of words; the inability of computers to distinguish the important from the unimportant; and, thirdly, the high complexity of the world and of human everyday life, which discloses itself only in bodily interaction with the world, while computers remain confined to micro-worlds. The contrast between complex everyday life and the complex abilities of the human brain on the one hand and the closed, rule-governed world of chess on the other was emphasized again and again by him and many other critics of AI.

Dreyfus developed these arguments further in his later publications, particularly in his book What Computers Can't Do, which appeared in several editions (Dreyfus 1985), and he reiterated them in the introduction to the second edition. In a 1992 edition he addressed neural networks in a further introduction, which, however, could not dispel his basic skepticism (Dreyfus 1992). Dreyfus thus fundamentally questioned the basic assumptions of AI, in particular the premise that brain and computer function according to the same principles. He criticized a narrowing of the concept of reason and intelligence to rational thinking, a "platonic reduction of all thinking to explicit rules" (Dreyfus 1985: 179). Intelligence, however, cannot be equated with "abstract logical thinking" (Dreyfus & Dreyfus 1987: 17). Dreyfus therefore concluded that AI will never be intelligent in the human sense, not even with neural networks.

In short, at the heart of his statements lies a fundamental skepticism that human thinking can be reproduced by digital computers. A "success" like Deep Blue's victory is not an imitation of human intelligence and therefore yields no deep insights or knowledge about human thinking. In his 1965 article in particular, Dreyfus had emphasized that "there is no reason to deny the evidence that human and mechanical information processing proceed in entirely different ways" (Dreyfus 1965: 63). However, Dreyfus admitted to being surprised by Deep Blue's victory. After the match, a discussion between Dreyfus and Daniel Dennett on the relevance of Deep Blue's victory for AI research took place on the US program NewsHour; the conversation was continued in the electronic journal Slate and reprinted in 2005 (cf. Dreyfus & Dennett 2005). Dreyfus again insisted that Deep Blue had not succeeded in playing chess in the human sense. He agreed with Dennett that Deep Blue was irrelevant to AI research and that GOFAI was a dead end. Given this unanimity, the conversation quickly moved away from the subject: the two philosophers instead discussed the robot Cog, where essentially the same lines of argument emerged, this time with regard to the question of the extent to which emotions can be simulated. Dreyfus once more fundamentally questioned the possibilities of AI and assumed a fundamental limit of digital methods (cf. ibid.). He also expressed renewed skepticism that serious successes could be achieved with computational methods, speaking of "symbol-crunching computers that would never even approach the problem-solving abilities of human beings" (quoted in Bringsjord 1998).

The criticism of the philosopher John R. Searle was similarly aimed at the concepts of thinking and understanding, and especially at the category of meaning. Well known, and repeatedly quoted and discussed, is his thought experiment of the "Chinese Room", with which he illustrated that computers can apparently produce meaning but do not understand meaning (Searle 1986). Searle's goal was to show that the question of whether computers can think must be strictly answered in the negative. He pointed to the computer's purely formal rule-following, which, according to his argument, only appears intelligent but does not actually enable understanding: "Syntax alone is not enough for semantics, and digital computers, as computers, by definition, have only a syntax" (ibid.: 33).

Searle expressed himself in this sense with regard to Deep Blue. He had reviewed Kurzweil's book The Age of Spiritual Machines and accused its author of presenting Deep Blue as an example of computer intelligence (Searle 1999). Searle, by contrast, insisted that Deep Blue was not intelligent but simply a "number cruncher". The machine does not think, does not play chess, does not even simulate the behavior of a chess player. Deep Blue is neither a replica of the human being nor does it provide any knowledge about the human brain. Searle argued along the lines of his own thought experiment. In a further reply he accused Kurzweil of statements that "suffer from a persistent confusion between simulating a cognitive process and duplicating it, and even worse confusion between the observer-relative, in-the-eye-of-the-beholder sense of concepts like intelligence, thinking etc., and the observer-independent intrinsic sense" (ibid.). In a nutshell, he criticized Kurzweil for confusing the simulation of thinking with real thinking. Searle repeatedly referred above all to the impossibility of the transition from syntax to semantics and of getting from quantity, as embodied par excellence by Deep Blue, to quality.

Philosophical critics of AI, two prominent examples of whom have been examined more closely here, thus fundamentally disputed the possibility of simulating human thinking and human intelligence with the computer. These critics saw Deep Blue as a symbol of a formal-logical approach in AI research, which they criticized as reductionist and as mere calculation. As Manuela Lenzen summarized: "The central criticism of the computer model of the mind is therefore that it is a dead end as a program for modeling human intelligence. There is no way from the chess computer Deep Blue to the typical achievements of general human intelligence" (Lenzen 2002: 65). Deep Blue mastered only one task, playing chess, and did so with purely computational methods; it thus remained far removed from the capabilities of the complex human brain. From this perspective, Deep Blue was clearly an irritating pseudo-success, at bottom a failure.

In line with the philosophical criticism just presented, computer chess was also commented on in German chess magazines, where it had been a recurring topic since the 1970s.Footnote 26 According to this criticism, playing chess does not consist of a "series of logical-mathematical operations", as computer chess assumes. A chess player does not "only perform logical operations"; he also plays, and therefore needs qualities such as "imagination, ability to combine, sagacity, wit, courage, caution". During the game he also feels emotions such as fear, joy or hope. The game of chess therefore cannot be reduced to "pure thinking" in the sense of logical, calculable operations. At best, a computer can imitate an elementary part of chess playing, but not the "game of chess as a specific whole".Footnote 27 Here, too, the arguments ran against the tradition of the formal-logical concept of thinking and invoked the significance of emotions, intuition, experience and the like, which is why what the computer does cannot be described as thinking.

In 1981, ex-world champion Boris Spassky criticized chess programs for being able to calculate a little while disregarding the principles of positional play. He was skeptical that a chess computer "will ever be able to proceed analytically like a grandmaster".Footnote 28 Not least, one frequently finds the argument that the computer lacks something like intuition,Footnote 29 and that playing is not simply calculating.Footnote 30