Machine Learning benchmarking is considered a difficult task, mainly because it involves many flexible components, unlike typical CPU benchmarking. With MLCommons CK, our aim is to make this difficult task as easy as possible. In the previous rounds of MLPerf submissions, more than 50% of all submissions were powered by CK, and we are aiming for even more in the next round (around January 2023). If you are unsure what this is all about, just try the commands below, and if you are curious to know more, join our Telegram group.

Those interested in modular ML benchmarking can also join our working group for weekly conf-calls.

PS: The only requirements are common sense and a system with 100 GB or more of free disk space. The benchmarks should run on almost any laptop or desktop, and even on a Raspberry Pi. They are tested on Linux and macOS.
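As a quick sanity check before you start (this is just a generic shell command, not part of the CM workflow), you can verify that your home partition has enough free space:

df -h $HOME   # confirm roughly 100 GB or more is available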

# Install the MLCommons CM automation framework
python3 -m pip install cmind

# Pull the MLCommons CK repository with the CM automation scripts
cm pull repo mlcommons@ck

# Run the MLPerf image classification benchmark (ResNet-50, ONNX Runtime)
# in performance mode with the Offline scenario
cm run script --tags=app,mlperf,_resnet50,_onnxruntime --env.OUTPUT_DIR=$HOME/final_results \
    --env.CM_LOADGEN_MODE=performance --env.CM_LOADGEN_SCENARIO=Offline \
    --add_deps.loadgen.version=r2.1 --add_deps_recursive.inference-src.tags=_octoml

# Repeat the same benchmark in accuracy mode
cm run script --tags=app,mlperf,_resnet50,_onnxruntime --env.OUTPUT_DIR=$HOME/final_results \
    --env.CM_LOADGEN_MODE=accuracy --env.CM_LOADGEN_SCENARIO=Offline \
    --add_deps.loadgen.version=r2.1 --add_deps_recursive.inference-src.tags=_octoml

 
