OpenAI, a non-profit Artificial Intelligence research organization, has released Neural MMO, a massively multiagent game environment for reinforcement learning agents. The platform supports a large, variable number of agents within a persistent and open-ended task. According to OpenAI, the genre of Massively Multiplayer Online Games (MMOs) simulates large ecosystems in which variable numbers of players compete in persistent and wide-ranging environments.
The San Francisco-based AI company used Neural MMO to train agents by rewarding them for their lifetime, that is, for how long they managed to stay alive. It found that the longer agents interacted with one another, the better they became at certain tasks, and that increasing the maximum number of concurrent agents magnified exploration. OpenAI also found that increasing the number of agent populations impelled them to spread out across different parts of the map, and that agents trained in larger settings consistently outperformed those trained in smaller settings.
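The lifetime-based reward described above can be illustrated with a toy multi-agent loop. This is a hypothetical sketch, not the Neural MMO API: the `SurvivalEnv` class and its methods are invented for illustration. Each living agent earns +1 reward per tick, so an agent's cumulative reward equals its lifetime.

```python
import random


class SurvivalEnv:
    """Toy multi-agent environment illustrating a lifetime-based reward.

    Hypothetical sketch only; this is not the Neural MMO API. Each agent
    receives +1 reward for every tick it remains alive, so maximizing
    cumulative reward is the same as maximizing survival time.
    """

    def __init__(self, num_agents, death_prob=0.1, seed=0):
        self.rng = random.Random(seed)
        self.alive = {i: True for i in range(num_agents)}
        self.lifetimes = {i: 0 for i in range(num_agents)}
        self.death_prob = death_prob

    def step(self):
        """Advance one tick: living agents get +1; some die at random."""
        rewards = {}
        for agent, alive in self.alive.items():
            if alive:
                rewards[agent] = 1
                self.lifetimes[agent] += 1
                # Stand-in for starvation/combat hazards in a real environment.
                if self.rng.random() < self.death_prob:
                    self.alive[agent] = False
            else:
                rewards[agent] = 0
        return rewards

    def done(self):
        return not any(self.alive.values())


env = SurvivalEnv(num_agents=4)
totals = {i: 0 for i in range(4)}
while not env.done():
    for agent, reward in env.step().items():
        totals[agent] += reward

# Each agent's cumulative reward equals its lifetime in ticks.
assert totals == env.lifetimes
```

In a real training setup this reward would feed a learning algorithm; the point of the sketch is only that "reward = lifetime" needs no hand-crafted task objective.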
Available on GitHub, Neural MMO is designed to support a large number of agents, as many as 128 in each of 100 concurrent servers. It ships with baselines trained on more than 100 worlds against which agents' performance can be compared, and its computational overhead is relatively low: training requires only a single desktop CPU. In December last year, OpenAI introduced a platform named CoinRun, aimed at measuring agents' ability to generalize to unfamiliar situations. And back in August, a team of researchers at the University of Agder in Norway open-sourced an environment for Artificial Intelligence training in real-time strategy video games.