TensorRT-LLM/tests/unittest/_torch
Latest commit ddf2d010e2 by Chenghao Zhang (2025-11-06 11:00:10 -08:00):
[TRTLLM-8814][feat] AutoDeploy: Use TRTLLM kernels for FP8 linear (#8820)
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: nvchenghaoz <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
attention [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495) 2025-11-06 17:39:57 +08:00
auto_deploy [TRTLLM-8814][feat] AutoDeploy: Use TRTLLM kernels for FP8 linear (#8820) 2025-11-06 11:00:10 -08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
misc [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
modeling [TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 (#8838) 2025-11-04 18:59:34 +08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [https://nvbugs/5565565] [fix] Remove waiver (#8450) 2025-11-04 16:42:31 +08:00
multi_gpu [None][infra] Waive failed cases on main 11/05 (#8936) 2025-11-04 22:54:45 -08:00
multi_gpu_modeling [https://nvbugs/5536131][fix] Fix illegal access issue when scale is not provided in Llama3/4. (#7960) 2025-10-16 22:46:19 +08:00
multimodal [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00
ray_orchestrator [None][chore] Use cached model in all ray tests (#8962) 2025-11-06 15:14:15 +01:00
sampler [https://nvbugs/5593199][test] Enhance beam search tests deterministic dummy model (#8625) 2025-10-29 06:12:22 +01:00
speculative [https://nvbugs/5498478][fix] Fix eagle3 fp8 kv target model + bf16 draft model + chunked prefill (#8910) 2025-11-06 07:41:21 -08:00
thop [TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 (#8838) 2025-11-04 18:59:34 +08:00
helpers.py [TRTLLM-8836][chore] Create ModelEngine from LlmArgs (#8600) 2025-11-01 05:26:06 -07:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00