(Image credit: Squidoodle)
AI has made significant progress in recent years, reaching superhuman performance on a wide range of tasks. Humans are no longer the best Go players, quiz-show contestants, or even, in some respects, the best doctors. Yet state-of-the-art AI cannot compete with simple animals at adapting to unexpected changes in the environment. This competition pits our best AI approaches against the animal kingdom to determine whether the great successes of AI are now ready to compete with the great successes of evolution at their own game.
The Playground (early version)
We are proposing a new kind of AI competition. Instead of providing a specific task, we will provide a well-defined arena (available at the end of April) and a list of cognitive abilities that we will test for in that arena. The tests will all use the same agent with the same inputs and actions. The goal will always be to retrieve the same food items by interacting with previously seen objects. However, the exact layout and variations of the tests will not be released until after the competition.
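To make the interface concrete, here is a minimal sketch of the fixed interaction loop the tests imply: observations in, one of a fixed set of actions out, reward on reaching food. Every name here (ArenaStub, the observation fields, the action labels) is a hypothetical placeholder; the real arena will be the Unity build released at the end of April.

```python
import random

class ArenaStub:
    """Hypothetical stand-in for the competition arena, for illustration
    only. The real environment is a Unity build with visual observations."""

    def reset(self):
        self.steps_left = 500
        # Placeholder observation; the real agent receives arena imagery.
        return {"vision": None, "velocity": (0.0, 0.0, 0.0)}

    def step(self, action):
        self.steps_left -= 1
        # Pretend a random policy stumbles onto food ~1% of the time.
        reward = 1.0 if random.random() < 0.01 else 0.0
        done = reward > 0 or self.steps_left == 0
        return {"vision": None, "velocity": (0.0, 0.0, 0.0)}, reward, done

env = ArenaStub()
obs, done, total = env.reset(), False, 0.0
while not done:
    # The same action set is used on every test, whatever its layout.
    action = random.choice(["forward", "backward", "left", "right"])
    obs, reward, done = env.step(action)
    total += reward
print(f"episode reward: {total}")
```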
We expect this to be a hard challenge. Winning this competition will require an AI system that can behave robustly and generalise to unseen cases. A perfect score will require a breakthrough in AI, well beyond current capabilities. However, even small successes will show that it is possible not just to find useful patterns in data, but to extrapolate from them to an understanding of how the world works.
Update (April 30th): We are excited to announce a partnership with the Whole Brain Architecture Initiative, who are sponsoring a standalone prize of $4,000 for the most biologically plausible entry. We hope this will lead to some exciting entries that not only behave like their biological counterparts, but are structured like them as well. The WBA prize winner need not be the best overall agent in the competition, but will have to pass a simple baseline to be eligible. More details about this extra prize will be released closer to the competition start date at the end of June.
Experimental environments will be created with the Unity ML-Agents Toolkit, and the specification of the building blocks for all tasks will be freely available to all participants. Participants may use any methods and experiment as much as they like in preparation for the competition, but the exact details of the tests will be kept secret. Performance will be measured on tasks ranging from simple combinations of the building blocks to complex configurations, each designed to probe particular cognitive abilities. The top prize will be awarded for performance across the full range of tests.
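For participants who want to start experimenting now, interaction with a Unity build goes through ML-Agents' Python interface. Below is a sketch of a random-baseline agent driving a generic build via the mlagents_envs package; it assumes a recent release of that package (the Python API has changed across versions), and the file name is a placeholder for the arena binary.

```python
from mlagents_envs.environment import UnityEnvironment

# Placeholder path: substitute the arena binary once it is released.
env = UnityEnvironment(file_name="path/to/arena_build")
env.reset()

# A single agent means a single behavior to control.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    episode_reward = 0.0
    while len(terminal_steps) == 0:
        # Random actions as a trivial baseline policy.
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)
        env.step()
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        episode_reward += decision_steps.reward.sum() + terminal_steps.reward.sum()
    print(f"episode {episode}: reward {episode_reward:.2f}")

env.close()
```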