Meta-Learning through Hebbian Plasticity in Random Networks

Motivation:

  • Most ML models become static once they are trained; one has to retrain the model to keep it adaptive to a new environment.

Main idea:

  • Instead of updating weights directly, update the rules (determined by the Hebbian coefficients) that update the weights

Some details:

  • Weights are updated by the Hebbian ABCD model, \Delta w_{i,j} = \eta_w \left(A_w o_i o_j + B_w o_i + C_w o_j + D_w\right), where o_i and o_j are the pre- and post-synaptic activations and each connection has its own coefficients (see the sketch after this list)
  • Let {\bf h} be the vectorized Hebbian coefficients. They are updated by an evolution strategy, {\bf h}_{t+1} \leftarrow {\bf h}_t + \frac{\alpha}{n \sigma} \sum_{i=1}^n F_i \, \Delta {\bf h}_i, where \Delta {\bf h}_i \sim \mathcal{N}({\bf 0}, \sigma^2 {\bf I}) and F_i is the fitness of the perturbed coefficients {\bf h}_t + \Delta {\bf h}_i (I am not completely certain how the fitness is evaluated; presumably it is the cumulative reward of an episode in which the agent's randomly initialized weights are updated online by the perturbed rules). A sketch of this outer loop also follows the list.
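
To make the per-connection rule concrete, here is a minimal NumPy sketch of one ABCD step for a single fully connected layer (my own illustration, not the paper's code; the name hebbian_update and the array layout are assumptions):

```python
import numpy as np

def hebbian_update(w, o_pre, o_post, eta, A, B, C, D):
    """One ABCD step: Delta w_ij = eta*(A*o_i*o_j + B*o_i + C*o_j + D).

    w, eta, A, B, C, D : (n_pre, n_post) arrays -- every connection has its
                         own five coefficients (these make up the vector h)
    o_pre, o_post      : (n_pre,) and (n_post,) layer activations
    """
    delta_w = eta * (A * np.outer(o_pre, o_post)   # A_w * o_i * o_j (correlation)
                     + B * o_pre[:, None]          # B_w * o_i (pre-synaptic term)
                     + C * o_post[None, :]         # C_w * o_j (post-synaptic term)
                     + D)                          # D_w (constant drift)
    return w + delta_w
```

Note that a layer with n_pre x n_post weights contributes 5 * n_pre * n_post entries to the evolved vector {\bf h}.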
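
And a sketch of the evolutionary outer loop that updates {\bf h} (again my own illustration; fitness_fn is an assumed callback, and n, sigma, alpha are placeholder hyperparameters, not the paper's values):

```python
import numpy as np

def es_step(h, fitness_fn, n=200, sigma=0.1, alpha=0.01, rng=None):
    """One evolution-strategies update of the flattened Hebbian coefficients h.

    fitness_fn(h_perturbed) -> scalar F_i, e.g. the episode reward of an
    agent whose random initial weights are updated by the perturbed rules.
    """
    rng = np.random.default_rng() if rng is None else rng
    deltas = sigma * rng.standard_normal((n, h.size))  # Delta h_i ~ N(0, sigma^2 I)
    F = np.array([fitness_fn(h + d) for d in deltas])  # F_i = fitness of h_t + Delta h_i
    # The fitness-weighted average of the perturbations estimates the gradient.
    return h + alpha / (n * sigma) * (F @ deltas)
```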

Refs: video, paper, Twitter posts
