1.1 Literature review

The first step of our study was to find an exhaustive survey in the field of artificial intelligence applied to game solvers. The purpose was to quickly identify and name the many existing techniques for further reading.

We selected the excellent work of Bruno Bouzy and Tristan Cazenave, "Computer Go: An AI oriented survey" [7], a 50-page paper published in 2000 in which the authors study in depth many approaches used to play the game of Go. The authors start by explaining the rules of Go and showcase a few common strategies used by human players. They also introduce several two-player perfect-information games such as Chess, Checkers and Othello and compare their complexity with Go. After the introduction, they systematically describe every technique used in two-player game solvers and assess their results using existing implementations.

The paper has an extensive bibliography of 149 references, all of them directly cited in the text. Using this survey as a starting point, we selected 42 references for further reading. In the conclusion of the survey, the two authors cite two promising reinforcement learning techniques to solve games: Temporal Difference and Monte Carlo. Because we lacked information about those two techniques, we added the book by Richard S. Sutton and Andrew G. Barto, "Reinforcement Learning - An Introduction" [38], to our readings. The book is very complete and explains in depth the key ideas behind autonomous learning. It demonstrates the trade-off between exploration (to gain knowledge) and exploitation (of that knowledge), and it also explains how an agent can quickly gather knowledge by using state-action-reward mechanisms and various algorithms. While the book is a great resource for understanding the foundations of learning in depth, the many algorithms it showcases all share a hard requirement that is impossible to fulfil in the game of Freecell.
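The state-action-reward mechanism and the exploration/exploitation trade-off mentioned above can be illustrated with a minimal epsilon-greedy Q-learning loop. The sketch below is our own toy example (a five-state corridor, with illustrative constants such as `ALPHA`, `GAMMA` and `EPSILON`); it is not an algorithm taken from the book, nor applied to Freecell:

```python
import random

# Epsilon-greedy Q-learning on a toy 5-state corridor.
# Illustrative toy example only: the environment and constants are ours.

N_STATES, GOAL = 5, 4       # states 0..4, reward 1 on reaching state 4
ACTIONS = (-1, +1)          # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment transition: returns (next_state, reward)."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    """Best known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(200):
    s = 0
    while s != GOAL:
        # exploration (random action) vs exploitation (greedy action)
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r = step(s, a)
        # temporal-difference update of the state-action value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # the learned greedy policy moves right (+1) in every state
```

Note how the agent must occasionally act at random (exploration) before it can discover the reward, and only then does the greedy policy (exploitation) become useful.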
Details are given in the Learning section.

From our initial readings, the following terms were identified (in no particular order): tree search, heuristic search, minimax, alpha-beta pruning, iterative deepening, transposition table, proof-number search, mathematical morphology, computer vision, neural network, Monte Carlo, planning, temporal difference, knowledge acquisition, best-first, A*, IDA*, reinforcement learning.

Even if the initial readings were fruitful for understanding most of those techniques, the papers were quite old (published before 2000) and some domains related to learning (neural networks, genetic programming and reinforcement learning) were not properly addressed. In order to fill the gaps, we used ResearchGate and Google Scholar to search for recent papers about the techniques mentioned above. We also searched for papers about single-player games such as Freecell, Klondike and Rush Hour.

As we were able to discover interesting papers efficiently via snowballing and via search engines, we did not perform a rigorous systematic review. Today our bibliography comprises about 60 papers in the domain of artificial intelligence and game solvers. We are confident that we read the majority of papers about informed tree search algorithms. We are also confident that we did not read enough about solvers using deep-neural-network techniques like AlphaGo. The reason we did not learn