Meta-Learning through Hebbian Plasticity in Random Networks

Motivation:

  • Most ML models become static once they are trained; to stay adapted to a new environment, the model has to be retrained

Main idea:

  • Instead of updating the weights directly, learn the rules (parameterized by the Hebbian coefficients) that update the weights

Some details:

  • Weights are updated by the Hebbian ABCD rule, $latex \Delta w_{i,j} = \eta_w (A_w o_i o_j + B_w o_i + C_w o_j + D_w)$, where $latex o_i$ and $latex o_j$ are the pre- and postsynaptic activations and each synapse has its own coefficients (a NumPy sketch follows this list)
  • Let $latex {\bf h}$ be the vectorized Hebbian coefficients; they are evolved with an evolution strategy, $latex {\bf h}_{t+1} \leftarrow {\bf h}_t + \frac{\alpha}{n \sigma} \sum_{i=1}^n F_i \, \Delta {\bf h}_i$, where $latex \Delta {\bf h}_i \sim \mathcal{N}({\bf 0}, \sigma {\bf I})$ and $latex F_i$ is the fitness of the perturbed coefficients $latex {\bf h}_t + \Delta {\bf h}_i$ (in the paper, the cumulative reward of an episode run with those coefficients); see the ES sketch below
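
Here is a minimal NumPy sketch of the ABCD update for one fully connected layer. The function name, the coefficient layout (one (eta, A, B, C, D) tuple per synapse), and the shapes are my own illustration, not the paper's released code:

    import numpy as np

    def hebbian_update(W, pre, post, coeffs):
        # W      : (n_pre, n_post) weight matrix
        # pre    : (n_pre,)  presynaptic activations o_i
        # post   : (n_post,) postsynaptic activations o_j
        # coeffs : (n_pre, n_post, 5) per-synapse [eta, A, B, C, D]
        #          (layout is illustrative, not from the paper)
        eta, A, B, C, D = np.moveaxis(coeffs, -1, 0)
        corr = np.outer(pre, post)  # o_i * o_j for every synapse
        # dW[i,j] = eta * (A*o_i*o_j + B*o_i + C*o_j + D)
        dW = eta * (A * corr + B * pre[:, None] + C * post[None, :] + D)
        return W + dW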
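And a sketch of one evolution-strategies step on the vectorized coefficients. Here evaluate_fitness is a hypothetical stand-in for running the agent for an episode under the perturbed coefficients and returning its cumulative reward; the fitness normalization is common ES practice rather than part of the formula above:

    import numpy as np

    def es_step(h, evaluate_fitness, n=200, sigma=0.1, alpha=0.2, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # n perturbations of h, each with standard deviation sigma
        dh = sigma * rng.standard_normal((n, h.size))
        # fitness of each perturbed coefficient vector h_t + dh_i
        F = np.array([evaluate_fitness(h + d) for d in dh])
        # normalize fitnesses for stability (common ES practice)
        F = (F - F.mean()) / (F.std() + 1e-8)
        # h_{t+1} = h_t + alpha/(n*sigma) * sum_i F_i * dh_i
        return h + alpha / (n * sigma) * (F @ dh)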

Ref: video, paper, Twitter posts
