This week, engineers from DeepMind (a subsidiary of Google/Alphabet) published a paper in Nature describing their newest AI advancement. The software, AlphaGo Zero, learned to play Go from scratch through self-play: a neural network improving by repeatedly playing against itself. Given only the rules of the game and just three days of self-play, the newcomer faced the original AlphaGo and won a staggering 100 games to 0. That original version of AlphaGo had defeated a human world champion in 2016.
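To get a feel for what "learning by playing itself" means, here is a toy sketch of self-play value learning. To be clear, this is not DeepMind's algorithm (no neural network, no Monte Carlo tree search), and the game, function names, and parameters are all my own illustration: two players alternate removing 1 or 2 stones from a pile (a variant of Nim), and whoever takes the last stone wins. The program is given only the rules, then plays itself and updates its estimate of how good each position is.

```python
import random

# Toy self-play learner, loosely inspired by the self-play idea behind
# AlphaGo Zero (NOT DeepMind's actual algorithm -- no neural network,
# no tree search).  Game: players alternate removing 1 or 2 stones;
# whoever takes the last stone wins.

def train(pile=10, episodes=20000, eps=0.1, lr=0.1, seed=0):
    """Learn value[s]: estimated win chance for the player to move with s stones."""
    rng = random.Random(seed)
    value = {s: 0.5 for s in range(1, pile + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        s, trajectory = pile, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < eps:
                m = rng.choice(moves)  # explore occasionally
            else:
                # exploit: leave the opponent in the worst-valued state
                m = min(moves, key=lambda mv: value[s - mv])
            trajectory.append(s)
            s -= m
        outcome = 1.0  # the player who just took the last stone won
        for state in reversed(trajectory):
            value[state] += lr * (outcome - value[state])
            outcome = 1.0 - outcome  # perspective alternates each ply
    return value
```

After training, the learned values single out the multiples of 3 as losing positions for the player to move, which is the known optimal theory for this Nim variant. Nobody told the program that: it rediscovered the strategy from the rules alone, which is the (vastly scaled-down) flavor of what Zero did with Go.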
By 40 days into its development, Zero was defeating the most advanced version of AlphaGo more than 90% of the time. Go has long been regarded as one of the most complex and difficult games to master, so what does this result mean? Is it simply a sign that processing power and brute-force algorithms have advanced to the point where Go is now a simple game, or are we seeing genuine self-learning progress on the AI front? What other problems seem complex today yet may prove brute-force breakable with modern systems? In any case, should everyone named Sarah Connor be worried?