# Sanity Perf Check Introduction
## Background
"Sanity perf check" is a mechanism to detect performance regressions in the L0 pipeline.
The tests defined in `l0_perf.yml` are required to pass for every PR before it is merged.
### `base_perf.csv`
The baseline for performance benchmarking is defined in `base_perf.csv`. This file contains the metrics that we check for regressions between CI runs.
This file contains records in the following format:
```
perf_case_name metric_type perf_metric threshold absolute_threshold
```
To allow for some machine-dependent variance in performance benchmarking, we also define a `threshold` and an `absolute_threshold`. This ensures we do not fail on results that fall within legitimate variance.
`threshold` is relative to the baseline value; `absolute_threshold` is an absolute difference.
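As a minimal illustration of how these two thresholds could be combined, here is a sketch of a check function. The rule that a result passes when its diff is within either the relative or the absolute bound is our assumption, not necessarily the exact logic of `sanity_perf_check.py`.

```python
# Hypothetical sketch -- not the actual sanity_perf_check.py logic.
def within_threshold(base_value: float, new_value: float,
                     threshold: float, absolute_threshold: float) -> bool:
    """Return True if `new_value` is close enough to `base_value`.

    Assumption: a metric passes when its diff is within EITHER the
    relative `threshold` (a fraction of the baseline) OR the
    `absolute_threshold` (in the metric's own units).
    """
    diff = abs(new_value - base_value)
    return diff <= abs(base_value) * threshold or diff <= absolute_threshold


# Example: a 3% relative threshold with a 5-unit absolute floor.
print(within_threshold(base_value=100.0, new_value=102.5,
                       threshold=0.03, absolute_threshold=5.0))  # True
```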
## CI
As part of our CI, `test_perf.py` collects performance metrics for the configurations defined in `l0_perf.yml`. This step outputs a `perf_script_test_results.csv` file containing the metrics collected for all configurations.
After this step completes, the CI runs `sanity_perf_check.py`. This script makes sure that every metric collected on the current branch is within the designated threshold of the baseline (`base_perf.csv`).
There are four possible outcomes for this check (a rough sketch of this classification is shown after the list):
- The current HEAD's impact on performance for our setups is within the accepted threshold. The perf check passes without exception.
- The current HEAD introduces a new setup/metric in `l0_perf.yml` or removes some of them. This results in different metrics being collected by `test_perf.py`, which will fail `sanity_perf_check.py`. This requires an update to `base_perf.csv`.
- The current HEAD improves performance for at least one metric by more than the accepted threshold, which will fail `sanity_perf_check.py`. This requires an update to `base_perf.csv`.
- The current HEAD introduces a regression to one of the metrics that exceeds the accepted threshold, which will fail `sanity_perf_check.py`. This requires fixing the current branch and rerunning the pipeline.
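As a rough illustration of that classification, the sketch below compares two perf CSV files using the column names from `base_perf.csv`. It assumes the target CSV uses the same columns and applies only the relative `threshold` for brevity; it is not the code that actually runs in CI.

```python
# Illustrative sketch of the outcome classification -- not the real CI code.
import csv


def load_metrics(path: str) -> dict:
    """Map (perf_case_name, metric_type) -> (perf_metric, threshold)."""
    with open(path, newline="") as f:
        return {
            (row["perf_case_name"], row["metric_type"]):
                (float(row["perf_metric"]), float(row["threshold"]))
            for row in csv.DictReader(f)
        }


def classify(base_csv: str, target_csv: str):
    base, target = load_metrics(base_csv), load_metrics(target_csv)
    added = target.keys() - base.keys()    # new setups/metrics -> base_perf.csv needs updating
    removed = base.keys() - target.keys()  # removed setups/metrics -> base_perf.csv needs updating
    over_threshold = []                    # improvement or regression beyond the accepted threshold
    for key in base.keys() & target.keys():
        base_value, threshold = base[key]
        new_value, _ = target[key]
        # Relative check only, for brevity; absolute_threshold is ignored here.
        if abs(new_value - base_value) > abs(base_value) * threshold:
            over_threshold.append(key)
    return added, removed, over_threshold  # all empty -> the check passes
```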
## Updating `base_perf.csv`
If a CI run fails `sanity_perf_check.py`, it uploads a patch file as an artifact. This file can be applied to the current branch using `git apply <patch_file>`.
The patch only updates the metrics whose difference exceeded the accepted threshold. It also adds or removes metrics according to the added or removed tests.
## Running locally
Given a `<target_perf_csv_path>`, you can compare it against another perf CSV file.
First, make sure you install the dependencies:
```bash
pip install -r tests/integration/defs/perf/requirements.txt
```
Then, you can run it with:
```bash
sanity_perf_check.py <target_perf_csv_path> <base_perf_csv_path>
```
**Note:** In the CI, `<base_perf_csv_path>` is the path to the `base_perf.csv` file mentioned above.
Running this prints the diffs between the two performance results. It presents only:
- Metrics whose diff is bigger than the accepted threshold.
- Metrics missing in `base_perf_csv`.
- Metrics missing in `target_perf_csv`.
If any diffs are found, it also generates a patch file that updates `base_perf_csv` with the new metrics; the patch is written to the same directory that `<target_perf_csv_path>` resides in.
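For intuition, such a patch can be produced as a plain unified diff, which `git apply` understands. The sketch below uses the standard library's `difflib` and is only an illustration of the idea, not how `sanity_perf_check.py` actually builds its patch; `updated_lines` is assumed to be the baseline content with the out-of-threshold rows already replaced.

```python
# Sketch of producing a git-applicable patch for base_perf.csv -- illustrative only.
import difflib
from pathlib import Path


def write_patch(base_csv: str, updated_lines: list[str], patch_path: str) -> None:
    """Write a unified diff that rewrites `base_csv` into `updated_lines`.

    `updated_lines` is assumed to be the baseline file's lines (with newlines
    kept) after substituting the changed metrics and adding/removing rows.
    """
    base = Path(base_csv)
    diff = difflib.unified_diff(
        base.read_text().splitlines(keepends=True),
        updated_lines,
        fromfile=f"a/{base.as_posix()}",
        tofile=f"b/{base.as_posix()}",
    )
    Path(patch_path).write_text("".join(diff))


# The resulting file can then be applied with: git apply <patch_path>
```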
## Generating a diff report
To view the difference between performance reports, you can generate a PDF report containing bar graphs that compare the values of each perf metric. Each metric's graph contains a group of comparison bars per configuration.
For example, if we run the script with 3 files and test 2 configurations per metric, we get 2 groups of 3 bars: one group per configuration, each containing the 3 values reported in the 3 files.
To generate this report:
```bash
python tests/integration/defs/perf/create_perf_comparison_report.py --output_path=<output_path_for_report> --files <csv file paths separated by spaces>
```
This will create a PDF file at `<output_path_for_report>`.
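For reference, a grouped bar chart like the one described above can be drawn with `matplotlib`. The sketch below assumes each input CSV contains the `perf_case_name`, `metric_type`, and `perf_metric` columns described earlier and that `matplotlib` is available; it is not the actual `create_perf_comparison_report.py` implementation.

```python
# Minimal sketch of a per-metric comparison report -- not create_perf_comparison_report.py itself.
import csv
from collections import defaultdict

import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages


def load(path):
    """Map metric_type -> {perf_case_name: perf_metric} for one CSV file."""
    data = defaultdict(dict)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            data[row["metric_type"]][row["perf_case_name"]] = float(row["perf_metric"])
    return data


def write_report(output_path, csv_paths):
    reports = {p: load(p) for p in csv_paths}
    metric_types = sorted({m for r in reports.values() for m in r})
    with PdfPages(output_path) as pdf:
        for metric in metric_types:  # one figure (one page) per metric
            cases = sorted({c for r in reports.values() for c in r.get(metric, {})})
            fig, ax = plt.subplots(figsize=(10, 4))
            width = 0.8 / len(reports)
            for i, (path, report) in enumerate(reports.items()):
                values = [report.get(metric, {}).get(c, 0.0) for c in cases]
                positions = [x + i * width for x in range(len(cases))]
                ax.bar(positions, values, width=width, label=path)
            # Center the configuration labels under each group of bars.
            ax.set_xticks([x + (len(reports) - 1) * width / 2 for x in range(len(cases))])
            ax.set_xticklabels(cases, rotation=45, ha="right")
            ax.set_title(metric)
            ax.legend()
            fig.tight_layout()
            pdf.savefig(fig)
            plt.close(fig)
```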