- Type: Story
- Resolution: Done
- Priority: Normal
- Sprints: PSAP Sprint 221, PSAP Sprint 222
https://github.com/neuralmagic/sparseml
SparseML provides libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models.
The goal of this work is to research whether relevant MLPerf benchmarks can be sparsified using this technique and to compare their performance on x86 CPUs, GPUs, and ARM CPUs: first on bare metal, then on OpenShift/RHODS.
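As background on what "applying a sparsification recipe" means in SparseML: recipes are YAML files describing how sparsity is introduced during training. A minimal sketch of a gradual magnitude pruning recipe is shown below; the specific sparsity targets and epoch values are illustrative assumptions, not settings taken from this work.

```yaml
# Illustrative SparseML recipe sketch (values are assumptions, not from this ticket).
# Gradually prunes all prunable layers from 5% to 85% sparsity over 30 epochs.
modifiers:
  - !GMPruningModifier
    init_sparsity: 0.05
    final_sparsity: 0.85
    start_epoch: 0.0
    end_epoch: 30.0
    update_frequency: 1.0
    params: __ALL_PRUNABLE__
```

A recipe like this is then applied to an existing PyTorch or TensorFlow training loop through SparseML's APIs, which is what allows sparsification "with a few lines of code" as described above.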