Proposed by Cerenaut and the Whole Brain Architecture Initiative
Background
There is potential to extend machine learning to more human-like capabilities by introducing episodic memory, which enables one-shot learning and memory replay (consolidating memories for long-term storage without catastrophic interference). Algorithms inspired by the hippocampus provide a promising approach; a recent example is AHA (Kowadlo 2019).
AHA and other one-shot learning approaches are built on the principle of learning new combinations of concepts. This relies on the concepts being factorised, i.e. each expressing a different component of the dataset. Factorisation is the goal of disentangled representation learning, a concept popularised by Bengio 2013 and advanced with seminal work on β-VAEs by Higgins 2017.
Would a β-VAE be an easy way to boost one-shot learning performance in systems such as AHA?
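For context, a β-VAE modifies the standard VAE objective only by weighting the KL term with a coefficient β > 1, which pressures the latent units to capture independent factors (Higgins 2017). A minimal sketch of that objective is given below, assuming a PyTorch encoder/decoder that produces x_recon, mu and logvar; the names are illustrative and are not taken from AHA or any existing codebase.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I))."""
    # Reconstruction term (Bernoulli likelihood, suitable for binary images).
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum')
    # KL divergence between N(mu, sigma^2) and the standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 up-weights the KL term, encouraging disentangled latents.
    return recon + beta * kl

In Higgins 2017, β is treated as a hyperparameter that trades reconstruction quality against disentanglement, so any integration with AHA would likely need a similar sweep.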
Aim and Outline
The aim of this project is to implement a β-VAE as the bridge into AHA (or another one-shot learning implementation) and test whether it improves performance. Assuming it does, the next step will be to extend the experiments to more difficult datasets. A stretch goal will be to modify k-sparse autoencoders (Makhzani 2014, 2015) to produce more disentangled representations.
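Regarding the stretch goal, the defining operation of a k-sparse autoencoder is to keep only the k largest hidden activations for each sample and zero out the rest before decoding (Makhzani 2014). A hedged PyTorch sketch of that selection step follows; the function name and tensor layout are illustrative, and how this step would be modified to encourage disentanglement is exactly the open part of the project.

import torch

def k_sparse(hidden, k):
    """Keep the k largest activations per sample and zero the rest
    (the support-selection step of a k-sparse autoencoder)."""
    # Indices of the top-k activations for each row (sample) in the batch.
    topk = torch.topk(hidden, k=k, dim=1)
    # Mask that is 1 at the selected units and 0 elsewhere.
    mask = torch.zeros_like(hidden)
    mask.scatter_(1, topk.indices, 1.0)
    return hidden * mask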
The project would be co-supervised by Project AGI, an Australian machine-learning startup. There is also an opportunity for collaboration with research groups in Japan via the Whole Brain Architecture Initiative (WBAI) and the Luria Neuroscience Institute.
Status
Open
A project was conducted as a Monash University Masters project in 2021.
The disentangled representations did not improve composability. The working hypothesis is that the sparse representations are already disentangled.
URLs and References
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
Makhzani, Alireza and Frey, Brendan. k-Sparse Autoencoders. 2014.
Makhzani, Alireza and Frey, Brendan J. Winner-Take-All Autoencoders. In Advances in Neural Information Processing Systems, 2015.
Higgins, Irina, Matthey, Loic, Pal, Arka, Burgess, Christopher, Glorot, Xavier, Botvinick, Matthew, Mohamed, Shakir, and Lerchner, Alexander. β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In ICLR, 2017.
Kowadlo, G., Ahmed, A., and Rawlinson, D. AHA! An “Artificial Hippocampal Algorithm” for Episodic Machine Learning. arXiv preprint, 2019. http://arxiv.org/abs/1909.10340