TensorRT-LLM/tests/unittest/_torch
gramnarayan 88b0fbc8ff
[#8245][feat] Autodeploy: Guided Decoding Support (#8551)
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Co-authored-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-10-28 09:29:57 +08:00
| Name | Last commit | Date |
|---|---|---|
| attention | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| auto_deploy | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| executor | [TRTLLM-8754][chore] Refine PyTorchModelEngine with llm args (#8493) | 2025-10-22 20:03:18 -04:00 |
| misc | [TRTLLM-4501][feat] Add input tensor pre-hook function API for the tuning process. (#6924) | 2025-10-15 21:18:11 +08:00 |
| modeling | [https://nvbugs/5608723][fix] Use local data on multimodal tests and unwaive tests (#8673) | 2025-10-28 09:20:02 +09:00 |
| models/checkpoints/hf | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) | 2025-08-25 23:56:21 -04:00 |
| modules | [None] [test] Add MNNVL AlltoAll tests to pre-merge (#8601) | 2025-10-27 21:39:44 +08:00 |
| multi_gpu | [TRTLLM-7318][feat] MnnvlThroughput AlltoAll implementation. (#7499) | 2025-10-27 13:23:06 -04:00 |
| multi_gpu_modeling | [https://nvbugs/5536131][fix] Fix illegal access issue when scale is not provided in Llama3/4. (#7960) | 2025-10-16 22:46:19 +08:00 |
| multimodal | [https://nvbugs/5608723][fix] Use local data on multimodal tests and unwaive tests (#8673) | 2025-10-28 09:20:02 +09:00 |
| ray_orchestrator | [TRTLLM-8513][feat] Add back worker extension (#8482) | 2025-10-24 20:30:28 -04:00 |
| sampler | [TRTLLM-8832][feat] fully async _select_generated_logits with tests (#8628) | 2025-10-27 16:15:32 +01:00 |
| speculative | [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) | 2025-10-21 11:11:04 -04:00 |
| thop | [None][feat] Add FP8 rowwise GEMMs for B200 (#8332) | 2025-10-27 16:33:14 -04:00 |
| helpers.py | [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) | 2025-09-26 11:28:05 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |