TensorRT-LLM/tests/unittest/_torch
Latest commit b181568d6f by Lucas Liebenwein
[TRTLLM-8201][feat] Nemotron H MoE Sharding (#8744)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Co-authored-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
2025-11-05 12:35:29 -08:00
attention [TRTLLM-8768][chore] Fuse QK down_proj with indexer K + weight_proj for FP4 ckpt (#8771) 2025-11-05 07:57:09 -08:00
auto_deploy [TRTLLM-8201][feat] Nemotron H MoE Sharding (#8744) 2025-11-05 12:35:29 -08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
misc [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
modeling [TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 (#8838) 2025-11-04 18:59:34 +08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [https://nvbugs/5565565] [fix] Remove waiver (#8450) 2025-11-04 16:42:31 +08:00
multi_gpu [None][infra] Waive failed cases on main 11/05 (#8936) 2025-11-04 22:54:45 -08:00
multi_gpu_modeling [https://nvbugs/5536131][fix] Fix illegal access issue when scale is not provided in Llama3/4. (#7960) 2025-10-16 22:46:19 +08:00
multimodal [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00
ray_orchestrator [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
sampler [https://nvbugs/5593199][test] Enhance beam search tests deterministic dummy model (#8625) 2025-10-29 06:12:22 +01:00
speculative [https://nvbugs/5534574][fix] disable spec decoding forever once the request spec decoding is disabled (#8446) 2025-10-29 19:28:43 +08:00
thop [TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 (#8838) 2025-11-04 18:59:34 +08:00
helpers.py [TRTLLM-8836][chore] Create ModelEngine from LlmArgs (#8600) 2025-11-01 05:26:06 -07:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00