Proposed by Cerenaut and the Whole Brain Architecture Initiative
(What is a Request for Research/RFR?)
Background
The vast majority of artificial neural networks have a fixed structure, defined by researchers using a combination of past experience and empirical optimization. Inter-layer connectivity and layer sizing do not vary during training. In many natural neural networks, such as the neocortex, neurons are physically organized as one flat “sheet” but are connected in a manner that produces a hierarchical (tree-like) structure. Small-scale microcircuitry and gross patterns of organization are preserved between individuals, but between these extremes there is considerable “plasticity” in the tree: experience determines the allocation of neurons to different regions within the hierarchy. Allowing training data to optimize both the structure and the weights of a network could greatly enhance its performance.
An initial investigation was completed as Part 1 of this RFR (in a Monash University Master’s project, link coming soon). A self-organising network was successfully implemented: its learning rules enabled a single-layer recurrent artificial neural network to form effective hierarchical structures. The codebase is available for extensions in this year’s project.
Aim and Outline
With a solid, functioning base now in place, a variety of exciting extensions are possible. This project aims to extend the first phase in one or more of the following directions:
- Modify the network so that it is convolutional and can be used on more complex datasets
- Make the network multimodal and test whether self-organisation is superior for integrating across sensing modalities (part of the original inspiration)
- Extend self-organisation, which is currently limited to receptive field location, to receptive field size and potentially other parameterisations (see the sketch after this list)
- Impose some level of structure on the available receptive field surface area, which is currently randomly organised, and examine the effects
- Explore other intrinsic metrics and meta-learning rules for improved optimization
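To make the receptive-field extension above concrete, the sketch below shows a hypothetical layer in which both receptive field location and size are trainable parameters, so that gradient descent can reposition and resize each unit’s field over a flat input “sheet”. This is purely illustrative: the class, parameter names, and Gaussian-window formulation are assumptions for this sketch and are not taken from the existing codebase or its learning rules.

```python
# Illustrative sketch only (not the project codebase): a layer whose
# receptive field location AND size are trainable, echoing the proposed
# "self-organise receptive field size" extension. Each unit applies a
# Gaussian window over a flat 1-D input sheet before a weighted sum.
import torch
import torch.nn as nn


class SelfOrganisingRF(nn.Module):
    def __init__(self, sheet_size: int, num_units: int):
        super().__init__()
        # Receptive field centres, initialised uniformly across the sheet.
        self.centre = nn.Parameter(torch.linspace(0.0, 1.0, num_units))
        # Log-width keeps the field size positive while remaining trainable.
        self.log_width = nn.Parameter(torch.full((num_units,), -2.0))
        # Ordinary synaptic weights from every sheet position to every unit.
        self.weight = nn.Parameter(torch.randn(num_units, sheet_size) * 0.01)
        # Fixed coordinates of the input sheet in [0, 1].
        self.register_buffer("coords", torch.linspace(0.0, 1.0, sheet_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sheet_size). Gaussian mask: (num_units, sheet_size).
        width = self.log_width.exp().unsqueeze(1)             # (num_units, 1)
        dist = self.coords.unsqueeze(0) - self.centre.unsqueeze(1)
        mask = torch.exp(-0.5 * (dist / width) ** 2)
        # Masked weights restrict each unit to its (learned) receptive field.
        return x @ (self.weight * mask).t()


# Usage: weights and receptive-field geometry all receive gradients.
layer = SelfOrganisingRF(sheet_size=64, num_units=8)
out = layer(torch.randn(4, 64))
out.sum().backward()
print(layer.centre.grad.shape, layer.log_width.grad.shape)
```

Parameterising the width in log-space is one possible design choice: it keeps each receptive field strictly positive while allowing it to grow or shrink freely during training, alongside the ordinary weight updates.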
Status
Open