Google’s DeepMind has been playing Atari video games since 2014, and it got pretty good, too, beating human scores. The problem was that it couldn’t remember how it did it. Every time a new Atari game was introduced, a new neural network had to be created, so the AI could never benefit from its own learned experience. However, a group of DeepMind researchers, in collaboration with colleagues at Imperial College London, has been busy creating an algorithm that could change all that.
The new algorithm allows the AI to learn knowledge, retain it, and then reuse it, and it has been demonstrated in both supervised learning and reinforcement learning tests. For humans, the basis of continual learning is synaptic consolidation: being able to save knowledge and use it later to carry out a different task is an essential part of how we learn. This is one area where machine learning has come unstuck.
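The published DeepMind–Imperial College work on this problem is the elastic weight consolidation (EWC) technique, which mimics synaptic consolidation by penalising changes to weights that were important for a previous task. As a rough illustration (not the authors' implementation; all variable names and values here are hypothetical), the core idea is a quadratic penalty weighted by an importance estimate for each weight:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    # Quadratic penalty anchoring the current weights (theta) to the
    # weights learned on an earlier task (theta_star). The per-weight
    # importance estimate (fisher) makes it costly to move weights
    # that mattered for the old task, while unimportant weights stay
    # free to adapt to the new one. lam trades off old vs. new tasks.
    return (lam / 2.0) * np.sum(fisher * (theta - theta_star) ** 2)

# Hypothetical toy values for illustration:
theta_star = np.array([1.0, -2.0, 0.5])   # weights after the old task
theta      = np.array([1.5, -2.0, 0.0])   # weights during the new task
fisher     = np.array([4.0,  1.0, 0.1])   # importance to the old task

penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)
# The first weight moved by 0.5 and is "important" (4.0), so it
# dominates the penalty; the equally large move in the third,
# unimportant weight contributes almost nothing.
```

During training on a new game, this penalty would simply be added to the new task's loss, so gradient descent balances new learning against preserving old skills.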
DeepMind is an impressive system, and it can now retain the most vital information from previous experiences while it learns. Even so, it has its limitations and still can’t perform as well as a neural network trained on just one game. If machine learning is to match real-world learning, the next step is to improve the efficiency of that learning. With the new algorithm supporting continual learning, much like a human brain does, this is a huge leap forward for AI, and one that will hopefully allow more creative and intellectual tasks to be carried out.