With image recognition, natural language processing, game-playing engines and other technologies becoming more common and embedded in everyday technology, artificial intelligence (AI) seems to be slowly permeating every facet of our daily lives and businesses (Arago 2017). You can ask a personal assistant like Alexa about the weather, take a picture and ask Microsoft's CaptionBot what it shows, or play chess against AI-based software.

With the recent win of Google's AlphaGo over world champion Lee Sedol in the ancient Chinese game of Go, attention on AI has risen significantly (DeepMind 2017). Mastering Go was considered one of the major challenges for AI, and AlphaGo's victory signifies an impressive breakthrough.

Although IBM's Deep Blue supercomputer was able to beat Garry Kasparov at chess in 1997, Go is significantly more complex. For example, after the first two moves of a chess game there are 400 possible next moves; in Go, there are close to 130,000 (Muoio 2016). Google's DeepMind team therefore could not use brute-force AI, in which a program maps out every possible game state in a decision tree, because there are simply too many possible moves. With the help of deep neural networks and supervised learning on games played by human experts, Google was able to train the system effectively.
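To see why exhaustive search is hopeless for Go, a quick back-of-the-envelope calculation helps (the branching factors below are rough averages commonly quoted for the two games, not exact figures):

```python
# Why brute-force search breaks down: the game tree grows exponentially
# with the branching factor (average number of legal moves per position).
# ~35 for chess and ~250 for Go are commonly cited rough averages.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def tree_size(branching_factor: int, depth: int) -> int:
    """Number of leaf positions in a full game tree of the given depth."""
    return branching_factor ** depth

# After just 10 moves (5 by each side):
print(f"chess: {tree_size(CHESS_BRANCHING, 10):.1e}")  # about 2.8e+15
print(f"go:    {tree_size(GO_BRANCHING, 10):.1e}")     # about 9.5e+23
```

Even at this modest depth the Go tree is already hundreds of millions of times larger than the chess tree, and full Go games run to hundreds of moves, so enumerating the tree is out of the question.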

This initially impressive result was topped in October 2017 with the introduction of AlphaGo Zero, which learned to master Go entirely through self-play, a form of reinforcement learning. This model beat the original version 100 games to 0. The principle was further developed into AlphaZero, a similar model that trains a neural network architecture with a generic reinforcement learning algorithm and has since beaten some of the strongest engines in shogi and chess.
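The idea of improving purely through self-play can be illustrated on a much smaller scale. The sketch below is my own toy example, not DeepMind's algorithm: tabular Monte Carlo learning applied to the game of Nim, where a single value table plays both sides of every game, and after each game the winner's moves are reinforced and the loser's penalized.

```python
import random

# Toy illustration of learning by self-play (the principle behind AlphaGo
# Zero, vastly simplified): tabular learning on the game of Nim. Players
# alternately remove 1-3 stones from a pile; whoever takes the last stone
# wins. The optimal strategy is to always leave a multiple of 4 stones.

ACTIONS = (1, 2, 3)

def train(pile_size=12, episodes=50_000, epsilon=0.1, alpha=0.5, seed=0):
    rng = random.Random(seed)
    # q[(stones, action)]: estimated value of taking `action` with `stones` left
    q = {(s, a): 0.0 for s in range(1, pile_size + 1) for a in ACTIONS if a <= s}

    for _ in range(episodes):
        stones = pile_size
        history = []  # (state, action) per move; players strictly alternate
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:          # explore occasionally
                action = rng.choice(legal)
            else:                               # otherwise play greedily
                action = max(legal, key=lambda a: q[(stones, a)])
            history.append((stones, action))
            stones -= action
        # The player who took the last stone won: propagate +1 back through
        # the winner's moves and -1 through the loser's, alternating signs.
        reward = 1.0
        for state, action in reversed(history):
            q[(state, action)] += alpha * (reward - q[(state, action)])
            reward = -reward
    return q

q = train()

def best(stones):
    """Greedy move of the learned policy."""
    return max((a for a in ACTIONS if a <= stones), key=lambda a: q[(stones, a)])
```

After training, `best(s)` takes `s % 4` stones whenever `s` is not a multiple of 4, leaving the opponent in a losing position; no human strategy was ever given to the program, only the outcomes of its own games.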

Time will tell where the superior qualities of reinforcement learning can be applied, in games and in business alike.

– Alexander Roznowski, February 2018