Robot Simulator
Robotic simulators can be used as virtual environments in which (would-be) AGI systems are trained and evaluated.
This page describes the desired specifications of the robotic simulator to be used around WBAI.
Please also check our request for research: 3D Agent Test Suites.
Simulation Environment
A sample environment with PyGazebo and an agent controlled with BriCA, Brain Simulator™, or Nengo can be found on GitHub.
LIS (Life in Silico), another environment built with the Unity Game Engine and Chainer, is being developed.
Simulator
Currently, prototypes are being developed with PyGazebo (video) and with the Unity Game Engine (LIS above).
Control environment
Robots are to be controlled with BriCA, Brain Simulator™, or Nengo from outside of the simulator.
The recommended control language is Python (as it is easy to use and has low platform dependency).
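The following is a minimal sketch of what such an external control loop could look like in Python. `SimulatorClient` and `Agent` are hypothetical placeholders, not actual APIs of PyGazebo, LIS, BriCA, Brain Simulator™, or Nengo.

```python
# Minimal sketch of an external controller loop (hypothetical interface).
# "SimulatorClient" and "Agent" are placeholders, not real APIs of
# PyGazebo, LIS, BriCA, Brain Simulator, or Nengo.

class SimulatorClient:
    """Hypothetical wrapper around the simulator's network interface."""
    def __init__(self, host="localhost", port=11345):
        self.host, self.port = host, port

    def observe(self):
        # Would return sensor data (image, reward, acceleration, ...).
        return {"image": None, "reward": 0.0}

    def act(self, command):
        # Would send actuator commands (wheel velocities, etc.).
        pass


class Agent:
    """Hypothetical agent; in practice this would be a BriCA/Nengo model."""
    def step(self, observation):
        # Decide wheel velocities from the current observation.
        return {"left_wheel": 0.1, "right_wheel": 0.1}


if __name__ == "__main__":
    sim = SimulatorClient()
    agent = Agent()
    for _ in range(1000):          # fixed-length episode for illustration
        obs = sim.observe()        # sensor input from the simulator
        cmd = agent.step(obs)      # cognitive architecture chooses an action
        sim.act(cmd)               # actuator output back to the simulator
```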
Task environment
With regard to rodent-level intelligence, mazes for behavioral tests are to be implemented (a minimal maze task is sketched at the end of this section).
As for the task environment for human-level intelligence, the simulation environment for RoboCup@Home will be considered. Currently, their reference environment is implemented with SigVerse, so anyone who wants to compete in a RoboCup league would have to use SigVerse.
However, as the simulator we use for our prototype is PyGazebo, we might propose that PyGazebo be used in a future RoboCup…
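As an illustration of a rodent-level behavioral test, the sketch below models a T-maze as a simple grid world with an external reward at the goal arms; the class and maze layout are hypothetical examples, not part of an existing WBAI codebase.

```python
# Minimal sketch of a T-maze behavioral test as a grid world (hypothetical).
# '#' = wall, ' ' = corridor, 'S' = start, 'G' = goal cell yielding external reward.
import random

MAZE = [
    "#######",
    "#G   G#",
    "### ###",
    "###S###",
    "#######",
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}


class TMaze:
    def __init__(self, maze=MAZE):
        self.maze = maze
        self.reset()

    def reset(self):
        # Place the agent on the start cell 'S'.
        for r, row in enumerate(self.maze):
            if "S" in row:
                self.pos = (r, row.index("S"))
        return self.pos

    def step(self, action):
        # Move unless a wall blocks the way; reward 1.0 on reaching a goal arm.
        dr, dc = MOVES[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        if self.maze[r][c] != "#":
            self.pos = (r, c)
        done = self.maze[self.pos[0]][self.pos[1]] == "G"
        reward = 1.0 if done else 0.0
        return self.pos, reward, done


# Example: a random agent exploring until it reaches one of the goal arms.
env = TMaze()
pos, done = env.reset(), False
while not done:
    pos, reward, done = env.step(random.choice(list(MOVES)))
```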
Robot (Overview)
The body shape of a simulated robot may be:
- Two-wheel turtle (good enough for rodent-level)
- ‘Centurtle’: with a turtle-like lower body and a humanoid upper body (as used in the RoboCup@Home Simulation League)
- Bipedal humanoid
It is desirable that a simulated robot has the following functions:
(See the Input and Output sections below for detail.)
- Visual perception
- Reward (external/internal)
- Acceleration and speed sensors [linear/rotational]
- Audio perception (optional)
- Locomotion
- Manipulator (optional)
- Text input and output (optional)
- Emotional expression (optional)
- Speech function (optional)
Input
It is desirable that a simulated robot has the following input functions.
- Visual perception
- Color perception
While animals do not always have color vision, for engineering purposes it is easier to process visual information with color.
- Depth perception
While animals do not always have stereo vision, for engineering purposes it is easier to process visual information with depth.
- Reward
- External reward is given when, for example, the robot gets a specific item (e.g., bait).
- Internal reward is given by internal logic when, for example, curiosity is satisfied.
- Tactile perception (optional)
- Required if the robot has a manipulator (optional).
- Acceleration sensor [linear/rotational] (optional)
- Speed sensor [linear/rotational] (optional)
Not recommended, as it is not biologically realistic.
It is rather recommended to estimate the speed from vision (a minimal optical-flow sketch appears after this list).
The method for odometry is to be determined.
- Audio perception (optional)
Required when the task requires audio-based language processing.
(See Auditory information processing below.)
- Text input (optional)
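As an illustration of estimating speed from vision instead of a dedicated sensor, the sketch below computes dense optical flow between consecutive camera frames with OpenCV; the pixel-to-meter calibration constant and frame rate are assumed values that would depend on the actual camera.

```python
# Minimal sketch: estimate ego-speed from vision via dense optical flow (OpenCV).
# PIXELS_TO_METERS is a hypothetical calibration constant; a real setup would
# derive it from camera intrinsics, camera height, and frame rate.
import cv2
import numpy as np

PIXELS_TO_METERS = 0.001   # assumed calibration constant (m per pixel)
FRAME_RATE = 30.0          # assumed camera frame rate (Hz)


def estimate_speed(prev_frame, frame):
    """Return a rough translational speed estimate (m/s) from two BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow: one (dx, dy) vector per pixel
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitudes = np.linalg.norm(flow, axis=2)      # pixel displacement per frame
    mean_pixels_per_frame = float(np.mean(magnitudes))
    return mean_pixels_per_frame * PIXELS_TO_METERS * FRAME_RATE
```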
Output
It is desirable that a simulated robot has the following output functions.
- Locomotion
- Default: LR bi-wheel (differential drive; a kinematics sketch appears after this list)
- Challenger’s option: N-pedal walker
- Manipulator (optional)
As a minimal form of object manipulation, a robot can move external objects by pushing them with its own body.
- Vocalization (optional)
One of the following:
- Text-to-speech (parser and word-phonetic dictionary)
- Phoneme vocalizer (Phoneme sets are language dependent.)
- General-purpose sound synthesizer
- Text output (optional)
- Emotional expression (optional)
Robots with social interaction may require it.
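For the default LR bi-wheel locomotion, the sketch below applies standard differential-drive kinematics to convert a commanded forward speed and turn rate into left and right wheel speeds; the wheel separation is an assumed parameter.

```python
# Minimal sketch: differential-drive (LR bi-wheel) kinematics.
# WHEEL_SEPARATION is an assumed robot parameter (meters).

WHEEL_SEPARATION = 0.3   # distance between the two wheels (assumed)


def wheel_speeds(linear_velocity, angular_velocity):
    """Convert body velocities (m/s, rad/s) to (left, right) wheel speeds (m/s)."""
    left = linear_velocity - angular_velocity * WHEEL_SEPARATION / 2.0
    right = linear_velocity + angular_velocity * WHEEL_SEPARATION / 2.0
    return left, right


# Example: drive forward at 0.2 m/s while turning left at 0.5 rad/s
left, right = wheel_speeds(0.2, 0.5)   # -> (0.125, 0.275)
```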
Perception API
While perceptual information processing may be implemented with machine learning algorithms, it is easier to use off-the-shelf libraries when perception is not the main subject of research. With the simulator, some information may also be obtained ‘by cheating’, i.e., read directly from the simulation environment.
APIs are to be wrapped for access from BriCA / Brain Simulator™.
- Visual information processing
The following are to be served:
- Deep Learning APIs for visual information processing
- Image processing API such as OpenCV / SimpleCV
And the following may be served as options:
- Object detection API (utilizing depth information)
- Border Ownership / Figure-Ground Separation API
- Measuring the apparent size, relative direction, distance, relative velocity, etc.
- Face detection API (a minimal wrapper is sketched below)
- Human skeletal mapping API (as found in Kinect API)
- Facial expression recognition API (adapted to the facial expression of robots)
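As an example of wrapping an off-the-shelf library behind a simple perception API, the sketch below wraps OpenCV's bundled Haar-cascade face detector; the function name and return format are illustrative, not an existing WBAI interface.

```python
# Minimal sketch: wrapping OpenCV's Haar-cascade face detector behind a small
# perception API. detect_faces() is an illustrative wrapper, not a WBAI API.
import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(bgr_image):
    """Return a list of (x, y, width, height) face bounding boxes."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in faces]
```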
- Auditory information processing