===== Robot Simulator =====
Robotic simulators can be used as virtual environments in which AGI (wannabe) systems are trained and evaluated.
This page describes the desired specifications of the robotic simulator to be used around WBAI.\\ Please also check our request for research: [[https://
==== Simulation Environment ====
A sample environment with PyGazebo and an agent controlled with BriCA / Brain Simulator<sup>TM</sup> is available:\\ [[https://
=== Simulator ===
Currently a prototype is being developed with PyGazebo.
=== Control environment ===
Robots are to be controlled with [[BriCA]] or Brain Simulator<sup>TM</sup>.
The recommended controlling language is Python (as it is easy to use and low in platform dependency).
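To make the control style concrete, below is a minimal sense-act loop in Python. This is a schematic sketch only: the class and port names (`SimpleComponent`, `in_port`, `out_port`) are illustrative assumptions that loosely mirror the port-based, tick-driven style of frameworks such as BriCA, and are not the BriCA or Brain Simulator<sup>TM</sup> API.

```python
class SimpleComponent:
    """A minimal port-based component: reads in_port, writes out_port.
    Illustrative only -- NOT the BriCA API."""

    def __init__(self, step_fn):
        self.in_port = None      # latest input (e.g., a sensor reading)
        self.out_port = None     # latest output (e.g., a motor command)
        self._step = step_fn     # per-tick transformation

    def fire(self):
        # Transform the current input into an output once per tick.
        self.out_port = self._step(self.in_port)


def run_pipeline(components, first_input, ticks=1):
    """Synchronously fire components in order, forwarding each
    output to the next component's input (a fixed pipeline)."""
    value = first_input
    for _ in range(ticks):
        for c in components:
            c.in_port = value
            c.fire()
            value = c.out_port
    return value


# Hypothetical pipeline: a sensor that reports a distance (metres) to an
# obstacle, and a controller that maps the distance to a motor command.
sensor = SimpleComponent(lambda _: 0.5)                       # 0.5 m ahead
controller = SimpleComponent(lambda d: "stop" if d < 1.0 else "go")
command = run_pipeline([sensor, controller], None)
```

In a real setup, the scheduler of the chosen framework would replace `run_pipeline`, and the ports would carry simulator sensor arrays and actuator commands rather than plain numbers.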
=== Task environment ===
With regard to rodent-level intelligence,
As for the task environment for human-level intelligence,
If one is to compete in a RoboCup league, one would have to use SigVerse.
However, as the simulator we use for our prototype is PyGazebo, we might propose using PyGazebo in a future RoboCup.
==== Robot (Overview) ====
The body shape of a simulated robot may be:\\
  * Two-wheel turtle (good enough for rodent-level)
  * ‘Centurtle’:
  * Bipedal humanoid
It is desirable that a simulated robot has the following functions:
  * Speech function (optional)
==== Input ====
It is desirable that a simulated robot has the following input functions:
  * Visual perception
    * Color perception\\ While animals do not always have color vision, for engineering purposes it is easier to process visual information with color.
    * Depth perception\\ While animals do not always have stereo vision, for engineering purposes it is easier to process visual information with depth.
  * Reward
    * External reward is given when, for example, the robot obtains a specific item (e.g., bait).
    * Internal reward is given by internal logic when, for example, curiosity is satisfied.
  * Tactile perception (optional)
  * Text input (optional)
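The two reward channels above can be sketched as follows. This is a hedged example, not part of the spec: the count-based novelty bonus is one common choice for a curiosity signal, and the class name, parameter values, and state labels (`RewardModel`, `bait_reward`, `"cell_A"`) are all illustrative assumptions.

```python
from collections import Counter

class RewardModel:
    """Combines an external reward (for obtaining bait) with a simple
    internal, curiosity-style reward. Illustrative sketch only."""

    def __init__(self, bait_reward=1.0, curiosity_scale=0.1):
        self.bait_reward = bait_reward
        self.curiosity_scale = curiosity_scale
        self.visits = Counter()   # how often each state has been seen

    def external(self, got_bait):
        # External reward: given when the robot obtains the bait item.
        return self.bait_reward if got_bait else 0.0

    def internal(self, state):
        # Internal reward from internal logic: a count-based novelty
        # bonus, so less-visited states yield a larger "curiosity" signal.
        self.visits[state] += 1
        return self.curiosity_scale / self.visits[state]

    def total(self, state, got_bait=False):
        return self.external(got_bait) + self.internal(state)

rm = RewardModel()
r1 = rm.total("cell_A")                  # first visit: curiosity bonus 0.1
r2 = rm.total("cell_A")                  # repeat visit: bonus halves to 0.05
r3 = rm.total("cell_B", got_bait=True)   # 1.0 external + 0.1 internal
```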
==== Output ====
It is desirable that a simulated robot has the following output functions:
  * Locomotion
    * Default:
    * Challenger’s option: N-pedal walker
  * Manipulator (optional)\\ As a minimal way for object manipulation,
  * Vocalization (optional)\\ One of the following:
    * Text-to-speech (parser and word-phonetic dictionary)
    * Phoneme vocalizer (Phoneme sets are language dependent.)
    * General-purpose sound synthesizer
  * Text output (optional)
  * Emotional expression (optional)
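For the two-wheel turtle body listed above, locomotion commands reduce to standard differential-drive kinematics. The sketch below converts a body velocity command into wheel speeds; the `wheel_radius` and `track_width` values are illustrative assumptions, not part of this spec.

```python
def wheel_speeds(v, omega, wheel_radius=0.05, track_width=0.3):
    """Convert a body command (v: forward speed in m/s, omega: yaw rate
    in rad/s) into (left, right) wheel angular velocities in rad/s for a
    two-wheel differential-drive 'turtle'. Parameter values are
    hypothetical; a real robot model supplies its own geometry."""
    v_left = v - omega * track_width / 2.0    # left wheel linear speed
    v_right = v + omega * track_width / 2.0   # right wheel linear speed
    return v_left / wheel_radius, v_right / wheel_radius

# Driving straight at 0.2 m/s: both wheels spin at 0.2 / 0.05 = 4.0 rad/s.
left, right = wheel_speeds(0.2, 0.0)
# Turning in place at 1 rad/s: wheels spin in opposite directions.
spin_left, spin_right = wheel_speeds(0.0, 1.0)
```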
==== Perception API ====
While perceptual information processing may be implemented with machine learning algorithms, when it is not the main subject of research it would be easier to use off-the-shelf libraries.
APIs are to be wrapped for access from BriCA / BrainSimulator<sup>TM</sup>.
  * Visual information processing
    The following are to be provided:
    * Deep Learning APIs for visual information processing
    * Image processing API such as OpenCV / SimpleCV
    And the following may be provided:
    * Object detection API\\ (utilizing depth info.)
    * Border Ownership / Figure Ground Separation API
    * Measuring the apparent
    * Face detection API (optional)
    * Human skeletal mapping API (as found in Kinect) (optional)
    * Facial expression recognition API (adapted to the facial expression of robots) (optional)
  * Auditory information processing
    * Sound and speech recognition API\\ Functions
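One way to wrap such off-the-shelf libraries behind a uniform API is sketched below, so the agent side (BriCA etc.) never depends on a particular backend. The `Detection` schema and the stub backend are assumptions made for illustration; a real backend would call, for example, an OpenCV face detector and convert its raw results into the same schema.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical common result schema for all perception backends."""
    label: str       # e.g., "face", "bait"
    bbox: tuple      # (x, y, width, height) in pixels
    score: float     # confidence in [0, 1]

class PerceptionAPI:
    """Uniform wrapper: a backend is any callable image -> raw results,
    paired with a converter raw results -> [Detection]."""

    def __init__(self, backend, to_detections):
        self._backend = backend
        self._convert = to_detections

    def detect(self, image):
        # Agent code only ever sees Detection objects, regardless of
        # which library produced them.
        return self._convert(self._backend(image))

# Stub backend standing in for a real detector (e.g., a Haar cascade):
# it returns raw (label, bbox, score) tuples as a library might.
def stub_backend(image):
    return [("face", (10, 20, 40, 40), 0.9)]

api = PerceptionAPI(stub_backend,
                    lambda raw: [Detection(*r) for r in raw])
dets = api.detect(None)   # a real call would pass an image array
```

Swapping OpenCV for another library then only requires a new backend/converter pair; the BriCA-facing `detect` interface stays unchanged.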
Last modified: 2015/12/29 11:51 by n.arakawa