Sanity Perf Check Introduction

Background

"Sanity perf check" is a mechanism to detect performance regressions in the L0 pipeline. The tests defined in l0_perf.yml are the ones that are required to pass for every PR before merge.

base_perf.csv

The baseline for performance benchmarking is defined in base_perf.csv. This file contains the metrics against which we check for regressions between CI runs.

This file contains records in the following format:

perf_case_name  metric_type  perf_metric  threshold  absolute_threshold

To allow for some machine-dependent variance in performance benchmarking, each record also defines a threshold and an absolute_threshold. This ensures we do not fail on results that fall within legitimate variance.

threshold is relative to the baseline perf_metric; absolute_threshold, as its name suggests, is an absolute margin.
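
For illustration, a record could look like the following (the case name, metric type, and all values here are hypothetical, not taken from the actual file):

my_perf_case  INFERENCE_TIME  100.0  0.1  5.0

Read this way, the baseline value for the metric is 100.0, with a relative threshold of 0.1 (10%) and an absolute threshold of 5.0.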

CI

As part of our CI, test_perf.py collects performance metrics for the configurations defined in l0_perf.yml. This step outputs perf_script_test_results.csv, which contains the metrics collected for all configurations.

After this step completes, the CI runs sanity_perf_check.py. This script verifies that every metric collected on this branch is within the designated threshold of the baseline (base_perf.csv).
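
Conceptually, the comparison amounts to checking each (perf_case_name, metric_type) pair in the target CSV against the baseline using the relative and absolute margins. The sketch below is a simplified illustration only, not the actual implementation: the key columns, the presence of a header row, and the rule used to combine threshold and absolute_threshold are assumptions; the authoritative logic lives in sanity_perf_check.py.

import csv

def is_within_threshold(baseline, current, threshold, absolute_threshold):
    # Accept the result if the change stays within the relative margin
    # or within the absolute margin (assumed combination rule).
    diff = abs(current - baseline)
    return diff <= baseline * threshold or diff <= absolute_threshold

def load_metrics(path):
    # Map (perf_case_name, metric_type) -> row, assuming the CSV columns
    # match the header shown above.
    with open(path, newline="") as f:
        return {(r["perf_case_name"], r["metric_type"]): r for r in csv.DictReader(f)}

base = load_metrics("base_perf.csv")
target = load_metrics("perf_script_test_results.csv")

for key, row in target.items():
    if key not in base:
        print("missing in baseline:", key)
        continue
    ref = base[key]
    if not is_within_threshold(float(ref["perf_metric"]), float(row["perf_metric"]),
                               float(ref["threshold"]), float(ref["absolute_threshold"])):
        print("diff over threshold:", key)

for key in base:
    if key not in target:
        print("missing in target:", key)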

There are four possible outcomes:

  1. The current HEAD's impact on performance for our setups is within the accepted threshold - the perf check passes without exception.
  2. The current HEAD adds a new setup/metric to l0_perf.yml or removes an existing one. This changes the set of metrics collected by test_perf.py, which fails sanity_perf_check.py and requires an update to base_perf.csv.
  3. The current HEAD improves performance for at least one metric by more than the accepted threshold, which fails sanity_perf_check.py. This requires an update to base_perf.csv.
  4. The current HEAD introduces a regression beyond the accepted threshold for one of the metrics, which fails sanity_perf_check.py. This requires fixing the current branch and rerunning the pipeline.

Updating base_perf.csv

If a CI run fails sanity_perf_check.py, it uploads a patch file as an artifact. This file can be applied to the current branch using git apply <patch_file>.

The patch only updates metrics whose difference exceeded the accepted threshold. It also adds or removes entries to match any added or removed tests.

Running locally

Given a target_perf_csv_path, you can compare it to another perf CSV file. First, make sure the dependencies are installed:

pip install -r tests/integration/defs/perf/requirements.txt

Then, you can run it with:

sanity_perf_check.py <target_perf_csv_path> <base_perf_csv_path>

Note: in the CI, <base_perf_csv_path> is the path to the base_perf.csv file mentioned above.
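
For example, to compare a local run against the checked-in baseline (the target file name is illustrative):

sanity_perf_check.py perf_script_test_results.csv tests/integration/defs/perf/base_perf.csv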

Running this prints the diffs between the two performance results. It reports only:

  1. Metrics that have a diff bigger than the accepted threshold.
  2. Metrics missing in base_perf_csv.
  3. Metrics missing in target_perf_csv.

If any diffs are found, it also generates a patch file that updates base_perf_csv with the new metrics; the patch is written to the same directory that <target_perf_csv_path> resides in.

Generating diff report

To view the differences between performance reports, you can generate a PDF report containing bar graphs that compare the perf metric values, one graph per metric. Each metric's graph contains comparison bars per configuration.

For example, if we run the script with 3 files and 2 configurations per metric, each metric will show 2 groups of 3 bars: one group per configuration, where each group contains the values reported for that metric in the 3 files.

To generate this report:

python tests/integration/defs/perf/create_perf_comparison_report.py --output_path=<output_path_for_report> --files <csv file paths separated by spaces>

This creates a PDF file at <output_path_for_report>.
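
For example, to compare three result files (file names are illustrative):

python tests/integration/defs/perf/create_perf_comparison_report.py --output_path=perf_report.pdf --files run_a.csv run_b.csv run_c.csv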