===== Robot Simulator =====

Robotic simulators can be used as virtual environments in which AGI systems are trained and evaluated. This page describes the desired specifications of the robotic simulator to be used around WBAI.

==== Simulation Environment ====

=== Simulator ===

Currently a prototype is being developed with [[http://pygazebo.readthedocs.org/|PyGazebo]] (a minimal control sketch is given at the end of this section).

=== Control environment ===

Robots are to be controlled with [[BriCA]] / [[http://www.goodai.com/#!brain-simulator/c81c|Brain Simulator]]<sup>TM</sup> from outside of the simulator.\\
The recommended control language is Python, as it is easy to use and has low platform dependency.

=== Task environment ===

With regard to rodent-level intelligence, mazes for behavioral tests are to be implemented (a minimal task sketch is also given at the end of this section).\\
As for the task environment for human-level intelligence, the simulation environment for RoboCup@Home will be considered. Currently, their reference environment is implemented with [[http://www.sigverse.org/wiki/en/|SigVerse]].\\
Anyone competing in a RoboCup league would have to use SigVerse. However, as the simulator we use for our prototype is PyGazebo, we might propose that PyGazebo be used in future RoboCup competitions.
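As a concreteness check for the control path above (a Python process driving the robot from outside the simulator), the following is a minimal sketch adapted from the PyGazebo documentation. The topic, model, and joint names are placeholders, not part of any agreed robot model.

<code python>
import trollius
from trollius import From

import pygazebo
import pygazebo.msg.joint_cmd_pb2


@trollius.coroutine
def publish_loop():
    # Connect to a running Gazebo server from outside the simulator.
    manager = yield From(pygazebo.connect())
    # Advertise a joint-command topic; the topic and joint names below
    # are placeholders for whatever the actual robot model defines.
    publisher = yield From(manager.advertise(
        '/gazebo/default/model/joint_cmd', 'gazebo.msgs.JointCmd'))
    message = pygazebo.msg.joint_cmd_pb2.JointCmd()
    message.name = 'robot::left_wheel'
    message.force = 1.0
    while True:
        yield From(publisher.publish(message))
        yield From(trollius.sleep(0.1))

trollius.get_event_loop().run_until_complete(publish_loop())
</code>

A BriCA / Brain Simulator<sup>TM</sup> component would sit on top of such a loop, reading sensor topics and writing actuator commands.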
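For the rodent-level maze tasks mentioned above, the task definition itself can stay independent of the simulator. The sketch below (all names are illustrative, not an existing API) describes a T-maze as a character grid and ties an external reward to reaching the baited arm; the grid would then be translated into walls of the simulated environment.

<code python>
# A simulator-independent T-maze for behavioral tests.
# '#' = wall, '.' = corridor, 'S' = start, 'R' = candidate reward arm.
T_MAZE = [
    "#######",
    "#R...R#",
    "###.###",
    "###.###",
    "###S###",
    "#######",
]


def free_cells(maze):
    """Return the walkable (row, col) cells of a maze drawn as strings."""
    return {(r, c)
            for r, row in enumerate(maze)
            for c, ch in enumerate(row)
            if ch != '#'}


def external_reward(position, baited_arm=(1, 1)):
    """External reward: 1.0 only when the agent reaches the baited arm."""
    return 1.0 if position == baited_arm else 0.0
</code>

Here //external_reward// corresponds to the bait-based external reward listed under **Input** below.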
==== Robot (Overview) ====

The body shape of a simulated robot may be:

  * Two-wheel turtle (good enough for rodent-level)
  * 'Centurtle': a turtle-like lower body with a humanoid upper body (as used in the RoboCup@Home Simulation League)
  * Bipedal humanoid

It is desirable that a simulated robot have the following functions\\
(see **Input** and **Output** below for details):

  * Visual perception
  * Reward (external/internal)
  * Acceleration and speed sensors [straight/rotational]
  * Audio perception (optional)
  * Locomotion
  * Manipulator (optional)
  * Text input and output (optional)
  * Emotional expression (optional)
  * Speech function (optional)

==== Input ====

It is desirable that a simulated robot have the following input functions:

  * Visual perception
    * Color perception\\ While animals do not always have color vision, for engineering purposes it is easier to process visual information with color.
    * Depth perception\\ While animals do not always have stereo vision, for engineering purposes it is easier to process visual information with depth.
  * Reward
    * External reward is given when, for example, the robot gets a specific item (bait).
    * Internal reward is given by internal logic when, for example, curiosity is satisfied.
  * Tactile perception (optional)\\ A manipulator (optional) requires it.
  * Acceleration sensor [straight/rotational] (optional)
  * Speed sensor [straight/rotational] (optional)\\ Not recommended, as it is not biologically realistic.\\ It is recommended instead to estimate the speed from vision (see the optical-flow sketch under **Perception API** below).\\ The method for [[https://en.wikipedia.org/wiki/Odometry|odometry]] is to be determined.
  * Audio perception (optional)\\ Required when the task requires audio-based language processing.\\ (See **Auditory information processing** below.)
  * Text input (optional)

==== Output ====

It is desirable that a simulated robot have the following output functions:

  * Locomotion
    * Default: L/R bi-wheel
    * Challenger's option: N-pedal walker
  * Manipulator (optional)\\ As a minimal form of object manipulation, a robot can move exterior objects by pushing them with its own body.
  * Vocalization (optional), one of the following:
    * Text-to-speech (parser and word-phonetic dictionary)
    * Phoneme vocalizer (phoneme sets are language-dependent)
    * General-purpose sound synthesizer
  * Text output (optional)
  * Emotional expression (optional)\\ A robot with social interaction may require it.

==== Perception API ====

While perceptual information processing may be implemented with machine learning algorithms, when it is not the main subject of research it would be easier to use off-the-shelf libraries. With the simulator, some information may also be obtained 'by cheating', i.e., taken directly from the simulation environment.\\
APIs are to be wrapped for access from BriCA / Brain Simulator<sup>TM</sup>.

**Visual information processing**: the following are served:
  * Deep learning APIs for visual information processing
  * Image processing APIs such as OpenCV / SimpleCV

The following may also be served:
  * Object detection API (with depth information)
  * Border ownership / figure-ground separation API\\ Apparent size, relative direction, distance, relative velocity, etc.
  * Face detection API (optional; see the sketch below)
  * Human skeletal mapping API (as seen in Kinect) (optional)
  * Facial expression recognition API (adapted to the facial expressions of robots) (optional)

**Auditory information processing** (optional):
  * Sound and speech recognition API\\ Functions such as those seen in Julius / HARK
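To illustrate what such a wrapped API might look like, here is a sketch of a face detection wrapper that also derives the object-level quantities listed above (apparent size, relative direction, distance) from a depth image. It assumes OpenCV's Python bindings (cv2) and NumPy; the cascade file path and the field of view are placeholder values, and the function name is illustrative.

<code python>
import math

import cv2
import numpy as np

# Placeholder cascade file; OpenCV ships several pretrained cascades.
CASCADE = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')


def detect_faces(bgr_image, depth_image, horizontal_fov=math.radians(60)):
    """Detect faces and attach depth-derived geometry to each detection."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    height, width = gray.shape
    detections = []
    for (x, y, w, h) in CASCADE.detectMultiScale(gray, 1.3, 5):
        # Relative direction: horizontal offset from the optical axis,
        # in radians, assuming the placeholder field of view.
        direction = ((x + w / 2.0) / width - 0.5) * horizontal_fov
        # Distance: median depth inside the box, robust to outliers.
        distance = float(np.median(depth_image[y:y + h, x:x + w]))
        # Apparent size: fraction of the image covered by the box.
        apparent_size = (w * h) / float(width * height)
        detections.append({'direction': direction,
                           'distance': distance,
                           'apparent_size': apparent_size})
    return detections
</code>

Relative velocity could then be obtained by differencing //distance// and //direction// across frames.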
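The speed-from-vision estimate recommended under **Input** could be served through the same layer. Below is a minimal sketch using OpenCV's dense optical flow; the pixel-to-meter scale and frame interval are calibration assumptions that depend on the camera and the simulation step, and the function name is illustrative.

<code python>
import cv2
import numpy as np


def estimate_speed(prev_gray, gray, meters_per_pixel=0.001, dt=0.1):
    """Estimate translational speed from dense optical flow.

    prev_gray, gray: consecutive grayscale frames (uint8 arrays).
    meters_per_pixel, dt: placeholder calibration values.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Per-pixel flow magnitude; the median is less sensitive to
    # independently moving objects than the mean.
    magnitude = np.linalg.norm(flow, axis=2)
    return float(np.median(magnitude)) * meters_per_pixel / dt
</code>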