Follow the article link to read the full comparison between FastAI, JAX, Keras, MXNet, PaddlePaddle, PyTorch, and PyTorch Lightning, covering:
- ease of implementation (user-friendly coding, ease of finding information online, etc.),
- time per epoch for the same model and the same training parameters,
- memory and GPU usage (measured with the PyTorch Profiler),
- accuracy obtained after the same training.
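The notebook measures memory with the PyTorch Profiler; as a framework-free sketch of the same measurement pattern (bracket a workload, then read the peak), Python's standard-library `tracemalloc` can be used. The helper name `peak_memory_mb` is illustrative, not from the notebook.

```python
import tracemalloc

def peak_memory_mb(workload):
    """Run `workload` and return its peak traced memory in MiB.

    Illustrative stand-in for the per-framework memory numbers the
    notebook collects with the PyTorch Profiler.
    """
    tracemalloc.start()
    workload()
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return peak / 1024 ** 2

# Example: allocating a 1M-element list should register a few MiB.
mb = peak_memory_mb(lambda: [0] * 1_000_000)
print(f"peak ~{mb:.1f} MiB")
```

In the actual notebook the same bracket would wrap one training epoch, so the frameworks are compared on identical work.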
The compare_framework_colab.ipynb notebook runs a CIFAR10 training with any of these frameworks; only the framework_name variable needs to be updated to switch between them.
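The single-variable switch can be sketched as a dispatch table keyed on `framework_name`. This is a minimal illustration of the pattern, not the notebook's actual code; the trainer function names are assumptions.

```python
framework_name = "pytorch"  # the only variable a user edits in the notebook

# Hypothetical per-framework training entry points (placeholders here).
def train_pytorch():
    return "trained CIFAR10 with pytorch"

def train_keras():
    return "trained CIFAR10 with keras"

TRAINERS = {
    "pytorch": train_pytorch,
    "keras": train_keras,
    # ... one entry per compared framework
}

def run_training(name):
    """Look up and run the trainer for `name`, failing loudly on typos."""
    try:
        trainer = TRAINERS[name]
    except KeyError:
        raise ValueError(f"unknown framework: {name!r}")
    return trainer()

print(run_training(framework_name))
```

Keeping all framework-specific code behind one lookup is what lets the rest of the notebook (data loading, timing, profiling) stay identical across runs.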