
The Fifth WBA ‘Working Memory’ Hackathon – A Retrospective

The fifth WBA hackathon, featuring working memory, concluded at the end of October (it was held online on CodaLab).  Although 15 teams participated, no final results were submitted, due in part to factors such as the difficulty of communicating online.

Here we look back at the task and consider possible directions for future hackathons, based on an interview with participants.

The Task

The hackathon provided a match-to-sample task environment.  In a match-to-sample task, the agent must judge whether the presented objects are the same.  In the delayed variant, the judgment must be made after the sample figure has disappeared from the screen, so working memory is required.
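To make the trial structure concrete, here is a minimal Python sketch of a delayed match-to-sample trial.  It is not the actual CodaLab environment; the `observe`/`respond` agent interface and the figure set are hypothetical.

```python
import random

SHAPES = ["circle", "square", "triangle"]  # hypothetical figure set

def delayed_match_to_sample_trial(agent):
    """Run one delayed match-to-sample trial (illustrative only)."""
    sample = random.choice(SHAPES)
    agent.observe(sample)            # sample phase: the figure is shown
    agent.observe(None)              # delay phase: blank screen, so the
                                     # sample must be held in working memory
    probe = random.choice(SHAPES)
    decision = agent.respond(probe)  # agent answers True for "match"
    return decision == (probe == sample)
```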

Hackathon participants had to create an intelligent agent with a brain-inspired architecture to solve match-to-sample tasks, by modifying the provided sample code.

Difficulties

Figure Recognition

To solve the tasks, the agent had to recognize the figures used in the hackathon.  Transformations such as rotation and scaling were applied to the figures to be compared, so the agent had to perform invariant object identification.
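One common route to such invariance is to train a recognizer on randomly transformed views of each figure.  The sketch below generates such views with SciPy; the rotation and scaling ranges are assumptions for illustration, not the hackathon's actual parameters.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def random_view(figure: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return a randomly rotated and rescaled view of a 2-D figure."""
    angle = rng.uniform(0.0, 360.0)    # rotation in degrees
    scale = rng.uniform(0.8, 1.2)      # scaling factor (assumed range)
    view = rotate(figure, angle, reshape=False, order=1)
    view = zoom(view, scale, order=1)  # changes the array size
    # Centre-crop or zero-pad back to the original shape so a classifier
    # always receives fixed-size inputs.
    h, w = figure.shape
    vh, vw = view.shape
    out = np.zeros_like(figure)
    sy, sx = max((vh - h) // 2, 0), max((vw - w) // 2, 0)
    dy, dx = max((h - vh) // 2, 0), max((w - vw) // 2, 0)
    ch, cw = min(h, vh), min(w, vw)
    out[dy:dy + ch, dx:dx + cw] = view[sy:sy + ch, sx:sx + cw]
    return out
```

A classifier trained on many such views of each figure can learn to identify it regardless of orientation and size, which is one standard (though not the only) way to obtain the required invariance.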

Active Vision

In the environment, the agent could see only part of each scene and had to learn to move its ‘gaze’ to look at the presented figures (active vision).  While this feature was introduced for realism, to mimic human vision, it may have posed a major difficulty.  Moreover, with active vision the agent needed working memory even for non-delayed match-to-sample tasks, because the figures to be compared could not be seen simultaneously.
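The sketch below illustrates the kind of partial observability involved: the agent receives only a small ‘glimpse’ patch around its gaze position, so two figures at different locations can never appear in one observation.  The patch size and the zero-padding at the borders are illustrative assumptions, not the environment's actual behavior.

```python
import numpy as np

def glimpse(scene: np.ndarray, gaze_y: int, gaze_x: int,
            size: int = 32) -> np.ndarray:
    """Return the size-by-size patch centred on the gaze position.

    The agent never observes `scene` directly, only such patches, so
    comparing two figures requires storing one of them in memory.
    """
    half = size // 2
    padded = np.pad(scene, half, mode="constant")  # keep border glimpses full-sized
    y, x = gaze_y + half, gaze_x + half            # coordinates in the padded scene
    return padded[y - half:y - half + size, x - half:x - half + size]
```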

Task Recognition

The hackathon included a variety of tasks, and the agent had to recognize which task (game) was taking place.  In one task the shapes of the figures were to be compared; in another, their colors.  In some tasks, a bar was placed around the figure and the position of the bar was to be compared.  The agent had to learn the task types and recognize them during the exemplar session.  This variety was introduced to require generality across the task set.
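As an illustration of what recognizing the task type amounts to, the sketch below infers which feature a task compares from labelled exemplar trials.  The feature names and data layout are hypothetical, not the hackathon's API.

```python
FEATURES = ("shape", "color", "bar_position")  # hypothetical feature names

def infer_task_feature(exemplars):
    """Infer which feature the current task compares.

    `exemplars` is a list of (figure_a, figure_b, is_match) triples,
    where each figure is a dict mapping feature names to values.  The
    relevant feature is the one whose equality predicts every label.
    """
    for feature in FEATURES:
        if all((a[feature] == b[feature]) == label
               for a, b, label in exemplars):
            return feature
    return None  # no single feature explains the exemplar session
```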

Multiple Learning

From the task specification, the agent had to learn:

  • to judge whether the presented figures matched in the feature relevant to the current task, based on the contents of working memory
  • to control (hold/release) the contents of working memory so that they support figure comparison (a toy sketch of such gating follows below)
  • to move its ‘gaze’ to the figure currently relevant in the scene
  • to recognize figure types (under changes in size and rotation) and bar positions
  • to recognize the task types

Creating an agent that combines these kinds of learning is known to be difficult.
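As a toy illustration of just one of these components, here is a one-slot working memory with explicit hold/release gating, as referenced in the list above.  It is a deliberate simplification: the hard part the agent had to learn is the control problem of when to store and clear the slot, typically as extra actions in its policy alongside gaze movements.

```python
class WorkingMemory:
    """A one-slot working memory with explicit hold/release gating.

    A deliberately minimal sketch, not a neuroscientific model: it only
    shows that the agent must decide *when* to store and clear contents,
    not just what to store.
    """

    def __init__(self):
        self._slot = None

    def store(self, item):
        """Gate open: latch the current percept into the slot."""
        self._slot = item

    def release(self):
        """Clear the slot, e.g. at the end of a trial."""
        self._slot = None

    def recall(self):
        return self._slot


def matches_memory(memory: WorkingMemory, percept) -> bool:
    """Judge the current percept against the stored sample."""
    return memory.recall() is not None and memory.recall() == percept
```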

Other Difficulties

Participants who were not already familiar with ideas from neuroscience/cognitive science and machine learning (frameworks) had to learn them during the hackathon.  As there is no established neuroscientific model of working memory, participants also had to come up with a model of their own.  Moreover, participants had difficulty grasping the structure of the sample code.

Future Directions

WBAI is developing a whole-brain reference architecture as part of promoting the development of human-like AGI.  In our methodology, regions of interest (ROIs) for various cognitive tasks are identified, and the functions of the neural circuits in those ROIs are described, implemented, and tested.  This methodology can be applied to the cognitive tasks required in the hackathon.  By assuming and implementing a common mechanism for a brain region used across multiple tasks, the generality of the whole-brain architecture can be ensured.

While there was no final submission, we heard from participants that they learned from the hackathon.  Based on the considerations above, we would like to plan new events in which more people can participate in ‘the open development of Whole Brain Architecture’.

Finally, we thank the hackathon participants and everyone who took an interest, and we look forward to further collaboration!

Cerenaut and WBAI