Radioactive data: tracing through training

Motivation

  • Want to verify if someone has used your dataset for training

Idea

  • Introduce an extra (artificial) feature into the data, e.g., a cat marker feature added to cat images and a dog marker feature added to dog images
  • Verify whether the dataset was used for training via hypothesis testing (a minimal sketch follows below)
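
A minimal sketch of the detection side, assuming a random carrier direction u added to the marked class features and a linear classifier direction w taken from the suspect model; the function name is mine and this is an illustration of the cosine-similarity test, not the paper's code:

```python
import numpy as np
from scipy import stats

def detection_p_value(w, u):
    # cosine alignment between the classifier direction w and the random carrier u
    c = np.dot(w, u) / (np.linalg.norm(w) * np.linalg.norm(u))
    d = w.size
    # under the null hypothesis (marked data never used), u is just a random
    # direction in R^d, so (1 + c) / 2 follows a Beta((d-1)/2, (d-1)/2) distribution
    return 1.0 - stats.beta.cdf((1.0 + c) / 2.0, (d - 1) / 2.0, (d - 1) / 2.0)

# a small p-value suggests the marked dataset was used for training
```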

Comments

  • It seems that the same classifier is both trained and tested. If a different classifier were trained on the marked data, I am not sure the method would actually work
  • The “radioactive” images actually look very bad
  • The idea does not seem new, and the execution is quite doubtful. It reminds me of the watermarking techniques popular in the early 2000s, which never really took off since they don’t really work.

Ref

Meta-Learning through Hebbian Plasticity in Random Networks

Motivation:

  • Most ML models become static once they are trained, or one has to retrain the model to keep it adapted to a new environment.

Main idea:

  • Instead of updating weights directly, update the rules (determined by the Hebbian coefficients) that update the weights

Some details:

  • Weights are updated by the Hebbian ABCD model, $latex \Delta w_{i,j} = \eta_w (A_w o_i o_j + B_w o_i + C_w o_j + D_w)$
  • Let $latex {\bf h}$ be the vectorized Hebbian coefficients. Then $latex {\bf h}_{t+1} \leftarrow {\bf h}_t + \frac{\alpha}{n \sigma} \sum_{i=1}^n F_i \, \Delta {\bf h}_i$, where $latex \Delta {\bf h}_i \sim \mathcal{N}({\bf 0}, \sigma {\bf I})$ and $latex F_i = F({\bf h}_t + \Delta {\bf h}_i)$ is a fitness evaluation of the perturbed coefficients (I am not completely certain how the fitness is evaluated). A minimal sketch of both updates follows after this list.
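
A minimal sketch of the two updates, assuming per-connection ABCD coefficients and a plain evolution-strategies step; the array shapes and function names are my own illustration, not the paper's code:

```python
import numpy as np

def hebbian_abcd_update(W, pre, post, h, eta=0.01):
    # one Hebbian ABCD step; h has shape (4, n_post, n_pre) holding A, B, C, D
    A, B, C, D = h
    dW = eta * (A * np.outer(post, pre) + B * pre[None, :] + C * post[:, None] + D)
    return W + dW

def es_step(h, fitness_fn, n=32, sigma=0.1, alpha=0.01):
    # evolution-strategies update of the Hebbian coefficients h
    perturbations = [sigma * np.random.randn(*h.shape) for _ in range(n)]
    F = [fitness_fn(h + dh) for dh in perturbations]   # fitness of each perturbed h
    grad = sum(f * dh for f, dh in zip(F, perturbations)) / (n * sigma)
    return h + alpha * grad
```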

Ref: video, paper, twitter posts

Reformer

Problem statement:

  • The Transformer is computationally expensive when there are many keys and queries (we need to compute the dot product between every pair, so the cost grows quadratically with the sequence length). It can be memory intensive as well, depending on the implementation. A minimal sketch of the dense computation follows below.
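
For reference, a minimal sketch of standard dense attention, where the score matrix is n × n; variable names are my own:

```python
import numpy as np

def dense_attention(Q, K, V):
    # Q, K, V: (n, d); the score matrix below is (n, n), hence the quadratic cost
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    A /= A.sum(axis=1, keepdims=True)
    return A @ V
```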

Proposed solution:

  • Group keys and queries into “buckets” first and only compute dot products between keys and queries in the same bucket.
  • Buckets can be implemented with a form of “locality sensitive hashing” (LSH). Seriously, I always forget what LSH means. Can they create an even more obscure jargon? (Fairly speaking, I think they definitely could.)
    • I think a simple example to understand LSH is the binary case. Assume two binary vectors are close according to the Hamming distance. Then, for a (say) length-100 binary vector, we can sort the vectors into 4 bins just according to the values of the first two bits. Actually, it is just coset and syndrome coding but with the parity check matrix “degenerated” (caring about nothing but the first two bits). If we assume the bits are independent, this degenerated choice is okay. Otherwise, we can just use a random coset as in Slepian-Wolf coding.
    • For the Reformer case, LSH is implemented as projections of the keys and queries onto random vectors. The signs of the resulting projections determine the bucket. This is essentially a generalization of Slepian-Wolf coding to the continuous case (a minimal sketch follows after this list).
  • Since they don’t want to store the bucket projection vectors, they make the step reversible instead. It is basically the same as the lifting scheme in wavelet construction.
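
A minimal sketch of sign-based LSH bucketing as described above; the actual Reformer uses a related angular LSH with random rotations, and the function and variable names here are my own illustration:

```python
import numpy as np

def lsh_buckets(x, n_bits=2, seed=0):
    # x: (n, d) array of key/query vectors (Reformer shares keys and queries)
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((x.shape[1], n_bits))   # random projection directions
    bits = (x @ R) > 0                              # sign pattern of each projection
    # read the sign pattern as a bucket id in [0, 2**n_bits)
    return bits.astype(int) @ (2 ** np.arange(n_bits))

# attention is then restricted to key/query pairs that fall in the same bucket
```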

Ref: Video, code, paper

BYOL

It is kind of mysterious that this works without using negative samples for self-supervised learning. See video and paper

  • The main idea is to train a representation network together with a predictor so that the latter predicts the (target) representation of an augmented view of the same input.
  • The representation network for the augmented view (the target network) uses an exponential moving average of the current representation network’s parameters. Similar tricks (target networks) have been used in deep reinforcement learning. A minimal sketch follows after this list.
  • It is indeed quite surprising that this works without negative samples, because nothing in the above model obviously prevents collapse to a trivial solution (everything maps to a constant).
  • Experimental results look good, but should not be read into too much. Their reimplementation of some older approaches has much higher prediction performance than originally reported, and they pulled numbers from the papers (reasonable, though) for comparison. The approach is probably on par, and without negative samples they can train with a smaller batch size.
  • They are using 512 TPUs for training for 7 hours…
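
A minimal sketch of the two key pieces, assuming PyTorch: the loss is the normalized-MSE (equivalently cosine) form, and the EMA update keeps the target network trailing the online one; function names are my own:

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # normalized MSE between the online predictor output and the target projection
    # of the other augmented view; equivalent to 2 - 2 * cosine similarity
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)   # stop-gradient on the target branch
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    # target parameters are an exponential moving average of the online parameters
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.data.mul_(tau).add_(o.data, alpha=1.0 - tau)
```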

Linformer

video and paper.

Remarks:

  • Project the keys and values to a lower dimension along the sequence length to save computation and memory (a minimal sketch follows after this list)
  • Some gain in speed, but it doesn’t look too significant. The tradeoff in performance seems larger than claimed.
  • Theorem 1, based on the JL lemma, does not use any properties of attention itself. It seems the same argument could be applied anywhere (not just to attention). The theorem itself feels like a bit of a stretch.
  • With the same goal of speeding up the transformer, the “kernelized transformer” appears to be a better piece of work.
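
A minimal sketch of the projected attention, assuming learned projections E and Fm of shape (k, n) that compress the sequence dimension; names are my own, and it can be compared with the dense attention sketch in the Reformer section above:

```python
import numpy as np

def linformer_attention(Q, K, V, E, Fm):
    # Q, K, V: (n, d); E, Fm: (k, n) with k << n
    K_proj = E @ K                                   # (k, d): n keys compressed to k
    V_proj = Fm @ V                                  # (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[1])      # (n, k) instead of (n, n)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                # row-wise softmax
    return A @ V_proj                                # (n, d)
```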

SIREN

A comeback of signal processing? A video from the authors and one from Yannic, the arXiv paper, and a nice website. The 3D reconstruction is very convincing. By the way, here is another video presentation of implicit representation.

Some remarks:

  1. Represent signals as functions instead of as multidimensional arrays of samples (not a completely new idea, as the authors point out)
  2. Match not just the signal itself but also its derivatives
  3. Use a sinusoidal activation function. This ensures that the derivative of the network is still well-defined (and is itself still a SIREN). A minimal sketch follows below.
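
A minimal sketch of a sine layer and a tiny SIREN, assuming PyTorch; the w0 = 30 scaling follows the paper's suggestion, while the rest is my own illustration and omits the paper's careful initialization scheme:

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # linear layer followed by sin(w0 * x); derivatives of sin are shifted sines,
    # so gradients of the whole network remain well-defined and SIREN-like
    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# a tiny SIREN mapping 2D pixel coordinates to intensity
siren = nn.Sequential(SineLayer(2, 64), SineLayer(64, 64), nn.Linear(64, 1))
```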

 
