RFR: 3D Agent Test Suites

Proposed by Project AGI (now Cerenaut) and the Whole Brain Architecture Initiative

(What is a Request for Research/RFR?)

Summary

This RFR is about creating a 3D simulation environment for use in human-level AI (HLAI) and Artificial General Intelligence (AGI) research.  Although this is more a request for engineering than for research, such an environment is an essential part of the research process, and you can conduct your own research with the test suite you create.

Background and motivation

Engineering human-like AI requires a sophisticated test environment.  Since being human-like means being able to act in a 3D environment, testing human-like AI requires a 3D environment.  Moreover, as humans learn in the same 3D environment they inhabit, the environment used for testing should also be usable for learning.  The environment could be a physical one, but physical robots in physical environments are expensive, time-consuming to work with, and difficult to reproduce.  We therefore suggest 3D simulated environments instead.

There seem to be few freely available 3D simulation environments that meet our criteria (see the Objective below).  Note that Python is assumed to be the language of choice for ML/AI research.

Status

Open.  See the bottom of the page.

Objective

To create an open-source software package containing:

  • a 3D agent simulation environment framework with a physics engine;
  • a sample reusable ‘scene’ defined in a standard object definition format such as URDF and possibly scripted in Python;
  • a sample reusable agent, also defined in a standard object definition format and scripted in Python.

The number of dependencies should be minimized and the number of supported environments (e.g. operating systems) maximized.  Where possible, build on top of existing software.

Success criteria

  • The sample environment and agent can be easily customized.
  • The package is well tested.
  • The package is well documented.

Detailed Project Description

Model characteristics:

We’ve outlined a set of example scenes below.  We recommend creating a simpler one first and then moving on to more complicated ones.

  1. Rodent Cognitive Test Suite (Mazes)

    Mazes are used to test the memory, learning, and navigation capabilities of rodents [Brigman 2010][Wolf 2016].  Mazes provide a good starting point for work on simulation environments because of their simplicity.  The environment must have at least walls, and the agent must be able to perceive the walls and to move.  Once you have created an environment, you should test it with at least one of the cognitive tests in [Brigman 2010] or [Wolf 2016].  A minimal PyBullet sketch of such a maze appears after this list of scenes.

  2. Crow Cognitive Test Suite

    Crows plan their use of tools.  For example, they plan to obtain tools in order to get food (video) (see other tasks).  An environment reproducing the test in the video should include cases, sticks, and stones.  The agent must be able to move and to perceive and manipulate the objects.  Once you have created an environment, you could test it with at least a simpler task that might be learned with reinforcement learning.

  3. Language Acquisition Test Suite

    Human language is grounded in the physical world; that is, the meaning of words and phrases is acquired through interaction between agents (humans) and the environment.  Moreover, language is acquired through interaction with other language-speaking agents (caretakers).  A simulator for human language acquisition should therefore contain simulated physical objects and language-using agents.  The agents (language learner and caretaker) must be able to perceive and manipulate the objects, and to utter and ‘hear’ linguistic expressions consisting of primitives such as phonemes.  Note that the caretaker can be choreographed.  Once you have created an environment, you could test it with a task from Chapter 7 of [Cangelosi 2015].

  4. ‘Wozniak’ Test Suite

    This scene simulates a robot making a cup of coffee in a kitchen that is new to it.  It is named after Steve Wozniak’s claim that such a robot will never see the light of day.  The environment must include a kitchen counter and shelf, a coffee maker, a water faucet or a water bottle, coffee filters, coffee powder, etc.  The robot should be able to move and to perceive and manipulate objects in the kitchen.  Related tasks are set in RoboCup@Home; while they use physical robots, you can use their tasks to test the suite you create.
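
As a concrete starting point for scene 1, here is a minimal sketch of a maze built directly from PyBullet box primitives.  The wall layout, dimensions, and the bundled r2d2.urdf model used as a stand-in agent are illustrative placeholders, not part of this RFR’s specification.

    import pybullet as p
    import pybullet_data

    # Connect without a GUI; use p.GUI to watch the simulation.
    p.connect(p.DIRECT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.8)
    p.loadURDF("plane.urdf")  # ground plane from the bundled pybullet_data assets

    def add_wall(center, half_extents):
        # A static box (mass 0) serving as a maze wall.
        col = p.createCollisionShape(p.GEOM_BOX, halfExtents=half_extents)
        vis = p.createVisualShape(p.GEOM_BOX, halfExtents=half_extents)
        return p.createMultiBody(baseMass=0,
                                 baseCollisionShapeIndex=col,
                                 baseVisualShapeIndex=vis,
                                 basePosition=center)

    # A toy corridor with an end wall (placeholder layout, not a full maze).
    add_wall([0, -0.75, 0.25], [2.0, 0.05, 0.25])
    add_wall([0, 0.75, 0.25], [2.0, 0.05, 0.25])
    add_wall([2.05, 0, 0.25], [0.05, 0.75, 0.25])

    # Placeholder agent: the r2d2 sample model shipped with pybullet_data.
    agent = p.loadURDF("r2d2.urdf", basePosition=[-1.5, 0, 0.5])

    for _ in range(240):  # one simulated second at the default 240 Hz timestep
        p.stepSimulation()

    print(p.getBasePositionAndOrientation(agent))
    p.disconnect()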

General requirements for the agents

The simulated agents must support the following functions:

  • Visual perception
  • Locomotion
  • Reward (external/internal) input
  • Linear/rotational acceleration and speed sensors
  • Manipulator (with tactile perception) (for suites 2, 3, and 4)
  • Text input and output (for suite 3)
  • Speech/audio functions (optional, for suite 3)

For simplicity, the agent can move on wheels and have just one manipulator (like Toyota’s HSR robot [see a blog article here for its simulator]).  Issues requiring robotics expertise, such as manipulator control, should be discussed with robotics communities.  A hypothetical interface covering the functions listed above is sketched below.
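
To make the capability list above concrete, here is a hypothetical Python interface sketch.  None of these method names or signatures are prescribed by this RFR; they simply map one method to each required function, with the suite-specific capabilities left optional.

    from abc import ABC, abstractmethod

    class SimulatedAgent(ABC):
        """Hypothetical agent interface covering the required functions."""

        @abstractmethod
        def get_camera_image(self):
            """Visual perception: return an RGB (and optionally depth) image."""

        @abstractmethod
        def set_wheel_velocities(self, left, right):
            """Locomotion: a simple differential wheel drive."""

        @abstractmethod
        def get_reward(self):
            """External/internal reward input for the current step."""

        @abstractmethod
        def get_inertial_readings(self):
            """Linear/rotational acceleration and speed sensor values."""

        # Optional capabilities, depending on the suite:
        def move_manipulator(self, target_pose):
            """Manipulator with tactile perception (suites 2, 3, and 4)."""
            raise NotImplementedError

        def receive_text(self, tokens):
            """Text input, e.g. a sequence of phonemes (suite 3)."""
            raise NotImplementedError

        def emit_text(self):
            """Text output (suite 3)."""
            raise NotImplementedError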

Computational Tools for Implementation:

All the tools used for implementation should be open-source.  In particular, we ask you to use PyBullet as the simulation engine and Python as the programming language.

PyBullet

PyBullet is ‘an easy to use Python module for physics simulation.’  It has its own rendering capability, can load standard object definition formats such as URDF, and comes with examples of popular machine learning methods such as DQN.  OpenAI Gym, a popular reinforcement learning toolkit, is integrated with PyBullet.
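
For reference, a minimal PyBullet session looks roughly like the following.  The calls shown (connect, loadURDF, stepSimulation, getCameraImage) are standard PyBullet API; the plane.urdf and r2d2.urdf assets are just samples bundled with pybullet_data and would be replaced by your own scene and agent definitions.

    import pybullet as p
    import pybullet_data

    p.connect(p.DIRECT)  # headless; p.GUI opens an interactive viewer instead
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.8)

    p.loadURDF("plane.urdf")                      # URDF-defined ground plane
    robot = p.loadURDF("r2d2.urdf", [0, 0, 0.5])  # URDF-defined sample robot

    for _ in range(240):                          # simulate one second at 240 Hz
        p.stepSimulation()

    # Render a 320x240 image from the default camera (visual perception).
    width, height, rgb, depth, seg = p.getCameraImage(320, 240)
    print(width, height, p.getBasePositionAndOrientation(robot))

    p.disconnect()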

Python

We ask you to use Python because it is now widely used among machine learning (AI) engineers, and adopting other languages would reduce the usability of the system (conventional 3D environments use other languages such as C#, C++, and Lua).

Multiple Platforms

The suite should be easy to install on major operating systems such as Linux, OS X, and Windows.

For now, please avoid using ROS, as it causes complications on OS X.

Dataset and Tests

This RFR is about engineering test suites, so please provide:

  • Documentation specifying the suite you have created.
  • A report on the tests you have performed with the suite.
  • A description of possible cognitive tests that can be carried out with the suite.

Related Work

You can check the following open-source 3D simulation environments for AI agents.

  • OpenAI Roboschool
    Since it uses Bullet and OpenAI Gym, it is quite similar to PyBullet.  While you can look into its code, its installation is less straightforward than PyBullet’s.
  • DeepMind Lab
    It is a project with aims similar to this RFR’s, but it uses Lua as its scripting language and the Quake III Arena engine as its game engine.  One option is to reimplement a similar environment with PyBullet for testing particular cognitive functions.
  • Unity ML-Agents Toolkit
    Yet another project based on ideas similar to those presented here.  It uses C# as its scripting language and NVIDIA PhysX as its default physics engine.  It is rather heavyweight, and C#-based scripting may deter Python engineers from developing new agents and environments.
  • SigVerse
    It is a heavyweight implementation based on ROS and the Unity game engine, and it also incorporates human-robot interaction in virtual reality.

For other 3D simulators, please look into this article: Touchy Simulations are fuel for Strong AI.  Also note that WBAI has tried to make simulators and has experienced difficulty maintaining them:

  • Life in Silico (LIS)
    The look and feel may be what this RFR expects.  It is difficult to maintain as it uses the Unity game engine.
    👉 Please use PyLIS with PyBullet.
  • Prototype with PyGazebo (video)
    A simulated maze would look like this.  PyGazebo is not for everyone because it uses ROS.
  • The environment for the WBA Hackathon 2017
    This LIS-based environment is also a maze to test memory and learning of virtual rodents.

[Cangelosi 2015] cites a few robotic simulation environments for developmental robotics, both non-commercial and commercial.

References

Jonathan L. Brigman, et al.: “Predictably irrational: assaying cognitive inflexibility in mouse models of schizophrenia,” https://doi.org/10.3389/neuro.01.013.2010 (2010)

Andrea Wolf, et al.: “A Comprehensive Behavioral Test Battery to Assess Learning and Memory in 129S6/Tg2576 Mice,” https://doi.org/10.1371/journal.pone.0147733 (2016)

Angelo Cangelosi: Developmental Robotics, MIT Press (2015)

Snorre Alm Harestad: The Nao Robot as a Platform for Evolutionary Robotics, Master’s Thesis, University of Oslo (2015)


For discussion, please join us in our reddit thread on Agent Environment.

Status

This RFR is open.

Note: 2019-05

WBAI has created a 3D simulation environment called PyLIS.  As it meets our criteria (see the Objective above), we strongly recommend it to readers of this RFR.  With PyLIS, you can start creating the suites described in the “Model characteristics” section right away.