Google LLC’s DeepMind released a new deep-learning AI system named MuZero, which can master games without knowing their rules beforehand. The agent’s impressive game-playing skill stems from planning only its next action in any given scenario instead of modeling the entire environment.
DeepMind extensively tested MuZero on games commonly used to benchmark such systems: Go, chess, shogi, and the Atari 57 suite. MuZero outperformed all prior algorithms and achieved superhuman performance in these games.
Traditional AIs come with many limitations, such as being unable to plan their next move without being fed relevant information beforehand. This limitation makes AI difficult to apply in complex real-life scenarios where data is not readily available. MuZero, however, works differently from traditional AI and from DeepMind’s previous agent, AlphaGo: instead of modeling the whole environment, MuZero plans only its next action, learning the rules of the game while learning how to play it. After all, knowing that an umbrella will keep you dry is more useful than modeling the path of raindrops in the air, as a DeepMind researcher put it.
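The idea described above can be illustrated with a toy sketch. This is not DeepMind's code, and the class and function names are invented for illustration: it only shows the shape of MuZero-style planning, where a learned model predicts just the quantities needed for decisions (a latent state, a reward, a value) rather than reconstructing the full environment, and a simple lookahead search picks the best action.

```python
# Hypothetical sketch, NOT DeepMind's implementation: a MuZero-style model
# predicts only decision-relevant quantities (latent state, reward, value)
# instead of modeling the entire environment.

class MuZeroStyleModel:
    """Toy stand-in for MuZero's three learned functions."""

    def representation(self, observation):
        # h: encode a raw observation into a compact latent state
        # (here just a sum; in MuZero this is a neural network)
        return sum(observation)

    def dynamics(self, latent, action):
        # g: advance the latent state and predict the immediate reward
        # (toy rule: even latent states yield reward 1.0)
        next_latent = latent + action
        reward = 1.0 if next_latent % 2 == 0 else 0.0
        return next_latent, reward

    def value(self, latent):
        # f: estimate how promising a latent state is (toy heuristic)
        return float(latent % 3)


def plan(model, observation, actions, depth=3):
    """Greedy lookahead entirely in latent space: simulate repeating each
    candidate action and pick the one with the highest predicted return.
    (MuZero uses Monte Carlo tree search; this deterministic rollout is a
    deliberate simplification.)"""
    root = model.representation(observation)
    best_action, best_return = None, float("-inf")
    for action in actions:
        latent, total = root, 0.0
        for _ in range(depth):
            latent, reward = model.dynamics(latent, action)
            total += reward
        total += model.value(latent)  # bootstrap from the final state
        if total > best_return:
            best_action, best_return = action, total
    return best_action


model = MuZeroStyleModel()
print(plan(model, observation=[1, 2, 3], actions=[0, 1]))  # prints 0
```

Note that `plan` never reconstructs an observation: every step of the rollout happens in the learned latent space, which is the core design choice the article describes.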
The ability to plan is an essential aspect of human intelligence, allowing people to solve problems and make decisions. If AIs can master this cognitive skill of generalizing what they learn and using it to plan ahead, the possibilities are limitless.