robot_simulator

Robotic simulators can be used as virtual environments in which would-be AGI systems are trained and evaluated.

This page describes the desired specifications of the robotic simulator to be used around WBAI.
Please also check our request for research: 3D Agent Test Suites.

A sample environment with PyGazebo and an agent controlled with BriCA, Brain SimulatorTM, or Nengo can be found on GitHub.
LIS (Life in Silico), another environment built with the Unity Game Engine and Chainer, is being developed.

Simulator

Currently, prototypes are being developed with PyGazebo (video) and with the Unity Game Engine (LIS above).

Control environment

Robots are to be controlled with BriCA, Brain SimulatorTM, or Nengo from outside of the simulator.
The recommended control language is Python, as it is easy to use and has low platform dependency.
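
An illustrative control loop in Python might look like the following. This is only a sketch: the SimulatorClient class and its methods are hypothetical placeholders for whatever interface the chosen simulator (e.g., PyGazebo) actually exposes.

  import numpy as np
  from simulator_client import SimulatorClient  # hypothetical wrapper around the simulator API

  def control_step(image):
      """Toy policy: drive forward while turning toward the brighter half of the image."""
      height, width = image.shape[:2]
      left_brightness = image[:, :width // 2].mean()
      right_brightness = image[:, width // 2:].mean()
      linear = 0.2                                                   # forward speed [m/s]
      angular = 0.5 if right_brightness > left_brightness else -0.5  # yaw rate [rad/s]
      return linear, angular

  client = SimulatorClient(host='localhost', port=11345)  # assumed constructor
  for step in range(1000):
      image = client.get_camera_image()      # RGB array from the simulated camera
      linear, angular = control_step(image)
      client.send_velocity(linear, angular)  # command to the wheeled base
      client.step()                          # advance the simulation by one tick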

Task environment

With regard to rodent-level intelligence, mazes for behavioral tests are to be implemented.
As for the task environment for human-level intelligence, the simulation environment for RoboCup@Home will be considered. Currently, their reference environment is implemented with SigVerse, so anyone who wishes to compete in a RoboCup league would have to use SigVerse. However, as the simulator used for our prototype is PyGazebo, we may propose using PyGazebo in future RoboCup competitions.

The body shape of a simulated robot may be:

It is desirable that a simulated robot has the following functions:
(See the Input and Output sections below for detail.)

  • Visual perception
  • Reward (external/internal)
  • Acceleration and speed sensors (linear/rotational)
  • Audio perception (optional)
  • Locomotion
  • Manipulator (optional)
  • Text input and output (optional)
  • Emotional expression (optional)
  • Speech function (optional)

It is desirable that a simulated robot has the following input functions.

  • Visual perception
    • Color perception
      While animals do not always have color vision, for engineering purposes it is easier to process visual information with color.
    • Depth perception
      While animals do not always have stereo vision, for engineering purposes it is easier to process visual information with depth.
  • Reward
    • External reward is given when, for example, the robot obtains a specific item (e.g., bait).
    • Internal reward is given by internal logic when, for example, curiosity is satisfied.
      (A minimal sketch of such a reward interface is given after this list.)
  • Tactile perception (optional)
    • A manipulator (optional) requires it.
  • Acceleration sensor (linear/rotational) (optional)
  • Speed sensor (linear/rotational) (optional)
    Not recommended, as it is not biologically realistic;
    it is recommended instead to estimate speed from vision.
    The method for odometry is to be determined.
  • Audio perception (optional)
    Required when the task requires audio-based language processing.
    (See Auditory information processing below.)
  • Text input (optional)
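
As a minimal sketch of the external/internal reward distinction above (the class and method names here are hypothetical, not part of any existing simulator API), the total reward delivered to a learning agent could be composed as follows, with internal reward modeled crudely as observation novelty:

  import numpy as np

  class RewardModule:
      """Combines external reward from the environment with an internal (curiosity-like) reward."""

      def __init__(self, internal_weight=0.1):
          self.internal_weight = internal_weight
          self.seen = []  # memory of past observations (toy novelty model)

      def internal_reward(self, observation):
          # Novelty: distance to the most similar observation seen so far.
          if not self.seen:
              self.seen.append(observation)
              return 1.0
          novelty = min(np.linalg.norm(observation - s) for s in self.seen)
          self.seen.append(observation)
          return float(novelty)

      def total_reward(self, external_reward, observation):
          # External reward (e.g., bait obtained) plus weighted internal reward.
          return external_reward + self.internal_weight * self.internal_reward(observation)

In practice, the internal reward would come from the agent's own learning machinery (e.g., prediction error), while the external reward is read directly from the simulation environment.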

It is desirable that a simulated robot has the following output functions.

  • Locomotion
    • Default: LR bi-wheel (see the differential-drive sketch after this list)
    • Challenger’s option: N-pedal walker
  • Manipulator (optional)
    As a minimal form of object manipulation, a robot can move external objects by pushing them with its own body.
  • Vocalization (optional)
    One of the following:
    • Text-to-speech (parser and word-phonetic dictionary)
    • Phoneme vocalizer (Phoneme sets are language dependent.)
    • General-purpose sound synthesizer
  • Text output (optional)
  • Emotional expression (optional)
    Robots with social interaction may require it.
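
For the default LR bi-wheel (differential-drive) base, the mapping from a desired body velocity to individual wheel speeds follows standard differential-drive kinematics. The sketch below assumes the wheel radius and axle length are known parameters of the simulated robot.

  def wheel_speeds(linear, angular, wheel_radius=0.05, axle_length=0.3):
      """Convert a body velocity command into left/right wheel angular speeds.

      linear  -- forward speed of the robot body [m/s]
      angular -- yaw rate of the robot body [rad/s]
      Returns (left, right) wheel angular speeds [rad/s].
      """
      left = (linear - angular * axle_length / 2.0) / wheel_radius
      right = (linear + angular * axle_length / 2.0) / wheel_radius
      return left, right

  # Example: drive forward at 0.2 m/s while turning left at 0.5 rad/s.
  left, right = wheel_speeds(0.2, 0.5)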

While perceptual information processing may be implemented with machine learning algorithms, when it is not the main subject of research it would be easier to use off-the-shelf libraries. With the simulator, some information may also be obtained ‘by cheating’, i.e., read directly from the simulation environment.
APIs are to be wrapped for access from BriCA / Brain SimulatorTM.
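
As an illustration of such wrapping, the sketch below exposes a perception routine as a component in the style of BriCA version 1; the exact port and scheduling API should be confirmed against the BriCA documentation, and detect_objects is a hypothetical stand-in for whatever vision library is actually used.

  import numpy as np
  import brica1  # BriCA version 1

  def detect_objects(image):
      # Hypothetical perception call; replace with an actual library (e.g., OpenCV-based).
      return np.zeros(4)  # e.g., (x, y, size, confidence) of the most salient object

  class VisionComponent(brica1.Component):
      """Wraps a perception routine so that other components can consume its output via ports."""

      def __init__(self, image_size, out_size=4):
          super(VisionComponent, self).__init__()
          self.make_in_port('image', image_size)   # raw camera image from the simulator
          self.make_out_port('objects', out_size)  # detected-object summary

      def fire(self):
          image = self.inputs['image']
          self.results['objects'] = detect_objects(image)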

  • Visual information processing

The following are to be provided:

  • Deep Learning APIs for visual information processing
  • Image processing API such as OpenCV / SimpleCV
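
As a small usage example of such an image processing API (a sketch only: the thresholds are illustrative, and the camera frame is assumed to be a BGR array as delivered by the simulator), edges and object contours can be extracted from a simulated camera frame with OpenCV:

  import cv2
  import numpy as np

  def find_contours(frame):
      """Extract object-like contours from a BGR camera frame."""
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # drop color for edge detection
      edges = cv2.Canny(gray, 50, 150)                # illustrative thresholds
      # Note: cv2.findContours returns (contours, hierarchy) in OpenCV 4
      # and (image, contours, hierarchy) in OpenCV 3.
      contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      return contours

  # Example with a dummy frame standing in for a real simulator image.
  frame = np.zeros((240, 320, 3), dtype=np.uint8)
  cv2.rectangle(frame, (100, 80), (220, 160), (255, 255, 255), -1)  # a filled white "object"
  print(len(find_contours(frame)))  # number of detected contours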

The following may be provided as options:

  • Object detection API
    (utilizing depth information; a minimal depth-based sketch is given after this list)
    • Border Ownership / Figure Ground Separation API
    • Measuring the apparent size, relative direction, distance, relative velocity, etc.
  • Face detection API
  • Human skeletal mapping API (as found in Kinect API)
  • Facial expression recognition API (adapted to the facial expression of robots)
  • Auditory information processing
    • Sound and speech recognition API
      Functions such as those found in Julius / HARK
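
As a minimal sketch of depth-based object detection (pure NumPy; the camera's horizontal field of view and the range threshold are assumed, illustrative values), the distance and bearing of the nearest object region could be extracted from a depth image as follows:

  import numpy as np

  def nearest_object(depth, horizontal_fov=1.047, max_range=5.0):
      """Locate the nearest surface region in a depth image.

      depth -- 2D array of per-pixel distances [m] from the simulated depth camera
      Returns (distance, bearing), with bearing in radians (positive to the right
      of the image center), or None if nothing is within max_range.
      """
      mask = depth < max_range                 # pixels belonging to nearby surfaces
      if not mask.any():
          return None
      nearest = depth[mask].min()
      rows, cols = np.where(depth == nearest)  # pixel(s) at the minimum distance
      width = depth.shape[1]
      bearing = (cols[0] - width / 2.0) / width * horizontal_fov
      return float(nearest), float(bearing)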