Preprint - https://arxiv.org/abs/2009.08449
Published - In AAAI 2021 Proceedings
Code and appendix - Paper1 directory
TL;DR - Explore the decision landscapes generated by soft-label k-Nearest Neighbors classifiers in the 'less than one'-shot learning setting.
Press coverage - LO-Shot Learning has received significant media attention.
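The core trick behind soft-label kNN decision landscapes can be sketched in a few lines. This is a minimal illustration, not code from the paper: the prototype positions and soft-label values below are made up, and the classifier is a simple distance-weighted soft-label kNN that blends the prototypes' label vectors.

```python
import numpy as np

# Two prototypes on a line, each carrying a soft label over THREE classes.
# (Positions and label values are illustrative, not taken from the paper.)
protos = np.array([0.0, 1.0])
soft_labels = np.array([[0.6, 0.4, 0.0],
                        [0.0, 0.4, 0.6]])

def slap_knn(x):
    """Distance-weighted soft-label kNN using all prototypes."""
    d = np.abs(protos - x)
    w = 1.0 / np.maximum(d, 1e-12)   # inverse-distance weights
    scores = w @ soft_labels          # blend the soft label vectors
    return int(np.argmax(scores))

[slap_knn(x) for x in (0.1, 0.5, 0.9)]  # → [0, 1, 2]
```

Two prototypes carve the line into three decision regions: the middle class wins between the prototypes because both of them partially support it, which is the 'less than one'-shot effect (fewer examples than classes).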
Preprint - https://arxiv.org/abs/2011.00228
Published - In PeerJ Computer Science
Code - Paper2 directory
TL;DR - Design optimal 1-NN prototypes even in pathological cases where most prototype methods fail.
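A small sketch of why prototype placement matters for 1-NN. The "pathological" setup below (two classes with very different widths) and the tuned prototype positions are illustrative assumptions, not the paper's construction: class centroids put the 1-NN boundary in the wrong place, while prototypes placed symmetrically about the true class boundary classify perfectly.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D toy case with unequal class widths: class 0 on [0, 1], class 1 on [1, 4].
X = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(1, 4, 500)])
y = np.repeat([0, 1], 500)

def one_nn_acc(protos):
    """Accuracy of 1-NN with one prototype per class."""
    pred = np.argmin(np.abs(X[:, None] - protos[None, :]), axis=1)
    return (pred == y).mean()

centroids = np.array([X[y == 0].mean(), X[y == 1].mean()])  # roughly [0.5, 2.5]
tuned = np.array([0.5, 1.5])  # symmetric about the true boundary at x = 1

one_nn_acc(centroids), one_nn_acc(tuned)
```

The centroid prototypes place the 1-NN boundary near x = 1.5, misclassifying part of the wide class, while the tuned pair recovers the true boundary at x = 1.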
Preprint - https://arxiv.org/abs/2102.07834
Published - In IJCNN 2021 Proceedings
Code - Paper3 directory
TL;DR - Represent your training dataset with fewer prototypes than even the number of classes found in the data.
Preprint - https://arxiv.org/abs/2202.04670
Published - In CogSci 2022 Proceedings
Code - LOSLP directory
TL;DR - Humans can also do 'less than one'-shot learning.
Preprint - https://arxiv.org/abs/1910.02551v3
Code - https://github.com/ilia10000/dataset-distillation
TL;DR - Experiments with soft-label dataset distillation (an algorithm that generates small synthetic datasets which train neural networks to the same performance as the original data) provided the first evidence of LO-Shot Learning in neural networks.
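The soft-label distillation loop can be sketched for a linear softmax classifier in NumPy. This is a simplified, hedged version of the idea, not the paper's algorithm: the real method distills for neural networks, whereas here the model is linear, the inner loop is a single gradient step from zero weights, the synthetic inputs are fixed, and only the soft labels of two synthetic points (fewer than the three classes) are learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: 3 Gaussian blobs in 2-D, one per class.
C, d, n_per = 3, 2, 50
angles = np.array([np.pi / 2, np.pi * 7 / 6, np.pi * 11 / 6])
centers = 3.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
X_real = np.concatenate([c + 0.3 * rng.standard_normal((n_per, d)) for c in centers])
y_real = np.repeat(np.arange(C), n_per)
Y_real = np.eye(C)[y_real]          # hard one-hot labels for the real data
n = len(X_real)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Two synthetic points (fewer than the 3 classes) with learnable soft labels.
m, eta_inner = 2, 1.0
X_syn = np.array([[1.0, 0.0], [0.0, 1.0]])   # fixed synthetic inputs (assumed)
Y_syn = np.full((m, C), 1.0 / C)             # soft labels, initialized uniform

def inner_step(Y_syn):
    # One gradient step on the synthetic set from w0 = 0; at w0 = 0 the
    # softmax is uniform, so the update has this closed form.
    return eta_inner * X_syn.T @ (Y_syn - 1.0 / C) / m   # w1, shape (d, C)

# Outer loop: optimize the soft labels so that one inner training step
# yields a classifier with low cross-entropy on the real data.
for _ in range(400):
    w1 = inner_step(Y_syn)
    P = softmax(X_real @ w1)
    grad_w1 = X_real.T @ (P - Y_real) / n      # outer cross-entropy gradient
    grad_Y = eta_inner * X_syn @ grad_w1 / m   # chain rule through inner_step
    Y_syn -= 0.5 * grad_Y

w1 = inner_step(Y_syn)
acc = (np.argmax(X_real @ w1, axis=1) == y_real).mean()
```

After distillation, training on just the two soft-labeled synthetic points separates all three classes, which is the LO-Shot phenomenon the TL;DR refers to.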