Month: June 2020
- Represent signals as continuous functions instead of discrete multidimensional data (not a completely new idea, as the authors point out)
- Match not just the signal itself but also its derivatives
- Use a sinusoidal activation function, so the derivative of the network is still well-defined (and is itself a SIREN)
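A minimal NumPy sketch of the sinusoidal-layer idea (the function name and the `omega0` frequency scale are my own labels, not taken verbatim from the paper):

```python
import numpy as np

def siren_layer(x, W, b, omega0=30.0):
    """One SIREN-style layer: sin applied to an affine map.
    omega0 scales the input frequencies (a hyperparameter)."""
    return np.sin(omega0 * (x @ W.T + b))

# Why the derivative is "still a siren": d/dz sin(z) = cos(z),
# and cos is just a phase-shifted sine: cos(z) = sin(z + pi/2).
z = np.linspace(-3.0, 3.0, 7)
assert np.allclose(np.cos(z), np.sin(z + np.pi / 2))
```

So differentiating the network only shifts phases and rescales weights; the gradient network has the same functional form.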
The proposed learning paradigm:
- Self-supervised pretraining
- Supervised finetuning
- Distillation: train a student to learn the output of the teacher rather than the true label.
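The distillation step can be sketched as a temperature-softened cross-entropy between teacher and student predictions (a standard formulation; the function names and the temperature `T` here are illustrative, not the paper's exact choices):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled, numerically stable softmax
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's softened
    predictions, instead of against the true one-hot label."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```

The loss is minimized when the student's distribution matches the teacher's, so matching the teacher's logits exactly gives a strictly lower loss than any mismatched prediction.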
It reaches a rather counterintuitive conclusion: labeled data do not always help, and using too much labeled data during training can fail to improve results.
A very nice presentation (as always) of CornerNet by Yannic Kilcher. The key ideas are the push-pull losses on corner embeddings and corner pooling. The ideas are simple and intuitive but very well executed. The authors include an ablation study quantifying the gain from corner pooling. Their results are competitive with other one-stage approaches: better than YOLOv3 but worse than YOLOv4. They also tested that, when ground-truth corner detections were used, their AP almost doubled, which shows that corner detection is their main bottleneck.
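The push-pull idea on corner embeddings can be sketched roughly as follows, for 1-D embeddings per object (the function name and margin value are assumptions for illustration, not the paper's exact loss):

```python
import numpy as np

def pull_push_loss(e_tl, e_br, margin=1.0):
    """e_tl, e_br: 1-D embedding per object for top-left and
    bottom-right corners (both shape [N]).
    Pull: corners of the same object move toward their mean.
    Push: means of different objects are separated by a margin."""
    e_mean = (e_tl + e_br) / 2.0
    pull = ((e_tl - e_mean) ** 2 + (e_br - e_mean) ** 2).mean()
    n = len(e_mean)
    push = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                # hinge: penalize pairs closer than the margin
                push += max(0.0, margin - abs(e_mean[i] - e_mean[j]))
    push /= max(n * (n - 1), 1)
    return pull, push
```

At matching time, a top-left and a bottom-right corner are grouped into one box when their embeddings are close, which is exactly what the pull term encourages.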