Proposed by Cerenaut and the Whole Brain Architecture Initiative
(What is a Request for Research/RFR?)
There is potential to extend machine learning to more human-like capabilities by introducing episodic memory, which enables one-shot learning and memory replay (consolidating memories for long-term storage without catastrophic interference).
Algorithms inspired by the hippocampus provide a promising approach; a recent example is AHA (Kowadlo et al., 2019). AHA uses an autoassociative memory based on the well-known Hopfield network. However, autoassociative memories are generally limited by the following two constraints:
Autoassociative memories have many other potential applications, and removing these constraints would have broad impact.
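To make the discussion concrete, the classical Hopfield autoassociative memory mentioned above can be sketched in a few lines: Hebbian outer-product learning and iterative sign-threshold recall. This is a minimal illustration of the standard model, not AHA's implementation.

```python
import numpy as np

class HopfieldNetwork:
    """Minimal binary Hopfield autoassociative memory (Hebbian outer-product rule)."""

    def __init__(self, n):
        self.n = n
        self.W = np.zeros((n, n))

    def store(self, patterns):
        # Patterns are vectors of +/-1; weights are the sum of outer products.
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)  # no self-connections

    def recall(self, probe, steps=10):
        # Synchronous updates until the state stops changing (or steps run out).
        s = probe.copy()
        for _ in range(steps):
            new = np.sign(self.W @ s)
            new[new == 0] = 1
            if np.array_equal(new, s):
                break
            s = new
        return s

# Store two orthogonal patterns, then recall one from a corrupted probe.
net = HopfieldNetwork(16)
p1 = np.array([1] * 8 + [-1] * 8)
p2 = np.array([1, -1] * 8)
net.store([p1, p2])

noisy = p1.copy()
noisy[0] *= -1  # flip two bits
noisy[9] *= -1
recovered = net.recall(noisy)
```

Even this toy example exhibits the limitations that motivate the project: capacity is fixed by the weight matrix size (roughly 0.14n random patterns for n units), and storing further patterns degrades earlier ones.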
Relatively little research has addressed these constraints; a good summary is given by Shen et al. (2010). Existing projects take very different approaches, such as SOMs (Yamada et al., 1999), growing networks (Sudo et al., 2009), fully connected multi-layer networks (Shen et al., 2010), and multiple Hopfield networks (Maurer et al., 2005).
A promising model of the autoassociative component of the hippocampus (Jensen & Lisman, 1996a, 1996b) provides inspiration for improving a Hopfield network.
The aim of this project is to extend a Hopfield network:
The proposed approaches are:
Successful execution of this project would likely result in publishable work.
Jensen, O., & Lisman, J. E. (1996a). Theta/gamma networks with slow NMDA channels learn sequences and encode episodic memory: Role of NMDA channels in recall. Learning & Memory, 3(2–3), 264–278. https://doi.org/10.1101/lm.3.2-3.264
Jensen, O., & Lisman, J. E. (1996b). Hippocampal CA3 region predicts memory sequences: Accounting for the phase precession of place cells. Learning & Memory, 3(2–3), 279–287. https://doi.org/10.1101/lm.3.2-3.279
Kowadlo, G., Ahmed, A., & Rawlinson, D. (2019). AHA! an “Artificial Hippocampal Algorithm” for Episodic Machine Learning. Retrieved from http://arxiv.org/abs/1909.10340
Maurer, A., Hersch, M., & Billard, A. G. (2005). Extended Hopfield network for sequence learning: Application to gesture recognition. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3696 LNCS, 493–498. https://doi.org/10.1007/11550822_77
Shen, F., Yu, H., Kasai, W., & Hasegawa, O. (2010). An associative memory system for incremental learning and temporal sequence. Proceedings of the International Joint Conference on Neural Networks, 1–8. https://doi.org/10.1109/IJCNN.2010.5596780
Sudo, A., Sato, A., & Hasegawa, O. (2009). Associative memory for online learning in noisy environments using self-organizing incremental neural network. IEEE Transactions on Neural Networks.
Yamada, T., Hattori, M., & Ito, H. (1999). Sequential learning for associative memory using Kohonen feature map. Proceedings of the International Joint Conference on Neural Networks (IJCNN'99).