(What is a Request for Research/RFR?)
This RFR is about creating a 3D simulation environment for use in human-level AI (HLAI) and artificial general intelligence (AGI) research. Although this is more a request for engineering than for research, it is an essential part of the research process, and you can conduct your own research with the test suite you create.
Engineering human-like AI requires a sophisticated test environment. Since being human-like means being able to act in a 3D environment, testing human-like AI requires a 3D environment. Moreover, as humans learn in the same 3D environment they inhabit, the environment for testing should also be used for learning. The environment could be a physical one, but using physical robots in the physical environment is expensive, time-consuming, and difficult to reproduce. So we suggest 3D simulated environments instead.
There seem to be few freely available 3D simulation environments that meet our criteria (see the Objective below). Note that Python is assumed to be the language of choice for ML/AI research.
To create an open-source software package containing:
We’ve outlined a set of example scenes below. We recommend creating a simpler one first and then moving to more complicated ones.
Mazes are used to test the memory, learning, and navigation capabilities of rodents [Brigman 2010][Wolf 2016]. Their simplicity makes them a good starting point for work on simulation environments. The environment must have at least walls, and the agent must be able to perceive the walls and to move. Once you have created an environment, you should test it with at least one of the cognitive tests in [Brigman 2010][Wolf 2016].
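Before building the 3D version, the maze logic (walls, local perception, movement) can be prototyped as a plain grid world. The following is a minimal sketch; the class and method names are hypothetical, not part of any existing package:

```python
# Minimal grid-maze prototype: '#' cells are walls, 'A' is the agent,
# 'G' is the goal. The agent perceives only the four adjacent cells
# and moves in cardinal directions -- the same contract the 3D
# environment would expose through rendered sensors and actuators.
MAZE = [
    "#####",
    "#A..#",
    "#.#.#",
    "#..G#",
    "#####",
]

class GridMaze:
    MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def __init__(self, rows):
        self.rows = [list(r) for r in rows]
        for r, row in enumerate(self.rows):
            for c, cell in enumerate(row):
                if cell == "A":
                    self.pos = (r, c)

    def perceive(self):
        """Report which of the four neighboring cells are walls."""
        r, c = self.pos
        return {d: self.rows[r + dr][c + dc] == "#"
                for d, (dr, dc) in self.MOVES.items()}

    def move(self, direction):
        """Move one cell unless a wall blocks the way; return success."""
        r, c = self.pos
        dr, dc = self.MOVES[direction]
        if self.rows[r + dr][c + dc] == "#":
            return False
        self.pos = (r + dr, c + dc)
        return True

env = GridMaze(MAZE)
print(env.perceive())  # agent starts at (1, 1): walls to N and W
```

A T-maze or radial maze from the cited test batteries is just a different `MAZE` string, which makes it easy to check learning code against the behavioral protocols before investing in 3D assets.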
Crows plan their tool use. For example, they plan to obtain tools in order to get food (video) (see other tasks). An environment reproducing the test in the video should include cases, sticks, and stones. The agent must be able to move and to perceive and manipulate the objects. Once you have created an environment, you could test it with at least a simpler task that might be learned with reinforcement learning.
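Such a simpler task could expose the reset/step interface that reinforcement-learning toolkits expect. The sketch below is hypothetical (the class, action names, and reward values are illustrative only); it captures the two-stage structure of the crow experiment, where the tool must be obtained before the food becomes reachable:

```python
# Hypothetical two-stage tool-use task with a Gym-style interface:
# the agent must TAKE_STICK before USE_STICK yields the food reward.
class StickFoodTask:
    ACTIONS = ("TAKE_STICK", "USE_STICK")

    def reset(self):
        self.has_stick = False
        self.done = False
        return self._obs()

    def _obs(self):
        return {"has_stick": self.has_stick}

    def step(self, action):
        reward = 0.0
        if action == "TAKE_STICK":
            self.has_stick = True
        elif action == "USE_STICK" and self.has_stick:
            reward = 1.0      # food obtained
            self.done = True
        return self._obs(), reward, self.done

env = StickFoodTask()
env.reset()
env.step("USE_STICK")         # no stick yet: reward stays 0.0
env.step("TAKE_STICK")
_, reward, done = env.step("USE_STICK")
print(reward, done)           # 1.0 True
```

Because the reward arrives only after the correct action sequence, even this toy version exercises the credit-assignment problem that the full 3D task poses.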
Human language is grounded in the physical world; that is, the meanings of words and phrases are acquired through interaction between agents (humans) and the environment. Moreover, language is acquired through interaction with other language-speaking agents (caretakers). So a simulator for human language acquisition should contain simulated physical objects and language-using agents. The agents (language learner and caretaker) must be able to perceive and manipulate the objects, and to utter and ‘hear’ linguistic expressions consisting of primitives such as phonemes. Note that the caretaker can be scripted. Once you have created an environment, you could test it with a task from Chapter 7 of [Cangelosi 2015].
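One way to prototype the learner side of this loop is cross-situational word learning, a mechanism discussed in the developmental-robotics literature [Cangelosi 2015]: the caretaker utters a word while several objects are in view, and the learner grounds each word in its most frequently co-occurring referent. A minimal sketch, with hypothetical names:

```python
from collections import Counter, defaultdict

# Hypothetical cross-situational learner: each utterance is paired
# with the set of objects currently visible; co-occurrence counts
# resolve the referential ambiguity over time.
class WordLearner:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def hear(self, word, visible_objects):
        """Record one caretaker utterance and the objects in view."""
        for obj in visible_objects:
            self.counts[word][obj] += 1

    def meaning(self, word):
        """Return the object most often seen with this word."""
        return self.counts[word].most_common(1)[0][0]

learner = WordLearner()
learner.hear("ball", ["ball", "cup"])   # ambiguous scene
learner.hear("ball", ["ball", "box"])   # ambiguity resolved
learner.hear("cup", ["cup", "box"])
print(learner.meaning("ball"))          # ball
```

In the full simulator, `visible_objects` would come from the learner's perception of the 3D scene and `word` from the scripted caretaker's utterances, but the association logic stays the same.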
This scene is a simulation in which a robot makes a cup of coffee in a kitchen new to the robot. It is named after Steve Wozniak’s claim that such a robot will never see the light of day. The environment must include a kitchen counter and shelf, a coffee maker, a water faucet or a water bottle, coffee filters, ground coffee, etc. The robot should be able to move and to perceive and manipulate objects in the kitchen. Related tasks are set in RoboCup@Home; while they use physical robots, you can use their tasks to test the suite you create.
The simulated agents here must have the following capabilities:
For simplicity, the agent can move on wheels and have just one manipulator (such as Toyota’s HSR robot [see a blog article here for its simulator]). Issues requiring robotics expertise, such as manipulator control, should be discussed with robotics communities.
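For a wheeled base like the HSR, agent motion can be approximated with the standard differential-drive kinematics, integrated with a simple Euler step. The function name below is hypothetical; v, omega, and dt are the usual linear velocity, angular velocity, and time step:

```python
import math

def drive_step(x, y, theta, v, omega, dt):
    """One Euler step of differential-drive kinematics:
    v is forward speed, omega is turn rate, theta is heading (rad)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along +x for one second, then turn in place 90 degrees.
x, y, th = drive_step(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=1.0)
x, y, th = drive_step(x, y, th, v=0.0, omega=math.pi / 2, dt=1.0)
print(round(x, 3), round(y, 3), round(th, 3))  # 1.0 0.0 1.571
```

In a physics-engine simulation this update would instead emerge from wheel torques and contact forces, but a closed-form model like this is useful for sanity-checking the simulated base.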
All the tools for implementation should be open-source. In particular, we ask you to use Pybullet as the simulation engine and Python as the programming language.
Pybullet is ‘an easy to use Python module for physics simulation.’ It has its own rendering capability, can load standard object definition formats such as URDF, and ships with examples of popular machine learning methods such as DQN. OpenAI Gym, a popular reinforcement learning toolkit, is integrated with Pybullet.
We ask you to use Python because it is now widely used among machine learning (AI) engineers, and admitting other languages would reduce the usability of the system (conventional 3D environments use other languages such as C#, C++, and Lua).
The suite should be easy to install on major operating systems such as Linux, OS X, and Windows.
At this moment, please avoid using ROS, as it causes complications when used with OS X.
This RFR is about engineering test suites, so please provide:
You can check the following open-source 3D simulation environments for AI agents.
For other 3D simulators, please look into this article: Touchy Simulations are fuel for Strong AI. Also note that WBAI has tried to make simulators and is experiencing difficulty maintaining them:
[Cangelosi 2015] cites a few robotic simulation environments for developmental robotics including the following.
Jonathan L. Brigman, et al.: “Predictably irrational: assaying cognitive inflexibility in mouse models of schizophrenia,” https://doi.org/10.3389/neuro.01.013.2010 (2010)
Andrea Wolf, et al.: “A Comprehensive Behavioral Test Battery to Assess Learning and Memory in 129S6/Tg2576 Mice,” https://doi.org/10.1371/journal.pone.0147733 (2016)
Angelo Cangelosi: Developmental Robotics, MIT Press (2015)
Snorre Alm Harestad: The Nao Robot as a Platform for Evolutionary Robotics, Master’s Thesis, University of Oslo (2015)
For discussion, please join us in our reddit thread on Agent Environment.