
Accuracy Evaluation

Hypothesis testing methodology

Null hypothesis and alternative hypothesis

For a given dataset and model, the evaluated scores can be viewed as a population with mean \mu and variance \sigma^2. Note that this distribution is not necessarily normal.

When we finish implementing a model, we need to set up an accuracy reference. By evaluating the model on a subset of n samples, we effectively draw n scores x_1, x_2, \dots, x_n from the population, and thus we can compute and record the sample average \bar{x} = \frac{1}{n} \sum_{i} x_i.

When testing for an accuracy regression, we once again evaluate the model on n samples, resulting in x'_1, x'_2, \dots, x'_n and the sample average \bar{x'} = \frac{1}{n} \sum_{i} x'_i. The question is: are these n samples drawn from the same distribution as the reference? This can be formulated as a hypothesis testing problem:

  • Null Hypothesis (H_0): x'_1, x'_2, \dots, x'_n are drawn from the same distribution as the reference.
  • Alternative Hypothesis (H_1): x'_1, x'_2, \dots, x'_n are drawn from a different distribution than the reference.

Since we only care about accuracy regressions, this should be a one-tailed hypothesis testing problem:

  • Null Hypothesis (H_0): x'_1, x'_2, \dots, x'_n are drawn from a distribution with a mean equal to or higher than the reference.
  • Alternative Hypothesis (H_1): x'_1, x'_2, \dots, x'_n are drawn from a distribution with a mean lower than the reference.

Hypothesis Testing

Two-sample t-test

According to the two-sample t-test method, we can compute the t-statistic t = \frac{\bar{x'} - \bar{x}}{\sqrt{2 \sigma^2 / n}}. By the Central Limit Theorem (CLT), the t-statistic follows a distribution that converges to the standard normal distribution \mathcal{N} (0, 1).
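As a concrete sketch of this computation (the numbers below are hypothetical; in practice \sigma and n come from the actual dataset):

```python
import math

def t_statistic(x_bar_new: float, x_bar_ref: float, sigma: float, n: int) -> float:
    """t = (x'_bar - x_bar) / sqrt(2 * sigma^2 / n), assuming equal variance
    sigma^2 and equal sample size n for both evaluations."""
    return (x_bar_new - x_bar_ref) / math.sqrt(2 * sigma ** 2 / n)

# Example: reference average 75.0, new average 74.0, sigma = 10, n = 400
print(t_statistic(74.0, 75.0, sigma=10.0, n=400))  # -> -sqrt(2), about -1.414
```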

Given the threshold \gamma, the false positive (type I error) rate \alpha can be formulated as:


\begin{equation*}
\begin{aligned}
\alpha &= P \left(\bar{x'} \leq \gamma \mid t \sim \mathcal{N} (0, 1) \right) \\
&= P \left(t \leq \frac{\gamma - \bar{x}}{\sqrt{2 \sigma^2 / n}} \mid t \sim \mathcal{N} (0, 1) \right).
\end{aligned}
\end{equation*}

In practice, we set an \alpha (e.g., 0.05) and then compute the threshold \gamma:


\begin{equation*}
\gamma = \Phi^{-1} (\alpha) \cdot \sqrt{2 \sigma^2 / n} + \bar{x}.
\end{equation*}

Note that \alpha is typically smaller than 0.5, so \gamma < \bar{x}.

Given the minimum detectable effect \theta, the false negative (type II error) rate \beta can be formulated as:


\begin{equation*}
\begin{aligned}
\beta &= P \left(\bar{x'} > \gamma \mid t \sim \mathcal{N} (-\frac{\theta}{\sqrt{2 \sigma^2 / n}}, 1) \right) \\
&= P \left(t > \frac{\gamma - \bar{x}}{\sqrt{2 \sigma^2 / n}} \mid t \sim \mathcal{N} (-\frac{\theta}{\sqrt{2 \sigma^2 / n}}, 1) \right) \\
&= P \left(t + \frac{\theta}{\sqrt{2 \sigma^2 / n}} > \frac{\gamma - \bar{x} + \theta}{\sqrt{2 \sigma^2 / n}} \mid t + \frac{\theta}{\sqrt{2 \sigma^2 / n}} \sim \mathcal{N} (0, 1) \right) \\
&= P \left(t + \frac{\theta}{\sqrt{2 \sigma^2 / n}} > \Phi^{-1} (\alpha) + \frac{\theta}{\sqrt{2 \sigma^2 / n}} \mid t + \frac{\theta}{\sqrt{2 \sigma^2 / n}} \sim \mathcal{N} (0, 1) \right)
\end{aligned}
\end{equation*}

In practice, we set a \beta (e.g., 0.2) and then compute \theta:


\begin{equation*}
\begin{aligned}
\theta &= (\Phi^{-1} (1-\beta) - \Phi^{-1} (\alpha)) \cdot \sqrt{2 \sigma^2 / n} \\
&= - (\Phi^{-1} (\alpha) + \Phi^{-1} (\beta)) \cdot \sqrt{2 \sigma^2 / n}
\end{aligned}
\end{equation*}

Note that \alpha and \beta are typically smaller than 0.5, so \theta > 0.
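The minimum detectable effect can be computed the same way; again, the example values are hypothetical:

```python
from statistics import NormalDist

def min_detectable_effect(sigma: float, n: int, alpha: float = 0.05, beta: float = 0.2) -> float:
    """theta = -(Phi^{-1}(alpha) + Phi^{-1}(beta)) * sqrt(2 * sigma^2 / n)."""
    inv = NormalDist().inv_cdf
    return -(inv(alpha) + inv(beta)) * (2 * sigma ** 2 / n) ** 0.5

theta = min_detectable_effect(sigma=10.0, n=400)
print(theta)  # positive, since alpha and beta are both below 0.5
```

Increasing n shrinks \theta, i.e., a larger sample volume lets the test detect smaller regressions.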

Steps to add accuracy tests

  • Estimate \sigma from the full dataset.
  • Decide a target minimum detectable effect \theta based on the nature of dataset and corresponding accuracy metric.
  • Decide \alpha and \beta based on the importance of model.
  • Iterate the sample volume n from small to large, computing \theta for each n, until it satisfies (is equal to or lower than) the target \theta.
  • Evaluate the model on the subset of sample volume n, resulting in the reference accuracy.
  • The threshold \gamma is automatically set up based on \alpha, \sigma, n and the reference accuracy.
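The sample-volume search in the steps above can be sketched as follows (hypothetical target values; the real harness lives in accuracy_core.py):

```python
from statistics import NormalDist

def choose_sample_size(sigma: float, target_theta: float,
                       alpha: float = 0.05, beta: float = 0.2, max_n: int = 8192):
    """Find the smallest n (doubling from 32) whose minimum detectable effect
    theta = -(Phi^{-1}(alpha) + Phi^{-1}(beta)) * sqrt(2 * sigma^2 / n)
    is equal to or lower than the target."""
    inv = NormalDist().inv_cdf
    n = 32
    while n <= max_n:
        theta = -(inv(alpha) + inv(beta)) * (2 * sigma ** 2 / n) ** 0.5
        if theta <= target_theta:  # detectable effect is small enough
            return n, theta
        n *= 2
    raise ValueError("target theta not reachable within max_n samples")

n, theta = choose_sample_size(sigma=10.0, target_theta=2.0)
print(n, theta)  # a larger sigma or smaller target theta would require more samples
```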