TensorRT-LLM/tests/unittest/_torch
Latest commit: e5f39ec7cf by mpikulski, 2025-11-28 13:00:39 +01:00
[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (#9454)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
Name | Last commit | Date
attention | [None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (#9376) | 2025-11-26 16:38:25 +08:00
auto_deploy | [None][feat] AutoDeploy: Add A_log fusion for Mamba layers (#9422) | 2025-11-26 14:39:20 -08:00
compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00
executor | [TRTLLM-8650][fix] beam search request validation (#8433) (#9228) | 2025-11-21 04:08:45 -08:00
misc | [None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (#9211) | 2025-11-28 13:32:21 +08:00
modeling | [TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (#9246) | 2025-11-26 11:12:35 +08:00
models/checkpoints/hf | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) | 2025-08-25 23:56:21 -04:00
modules | [https://nvbugs/5637037][chore] Update waive lists. (#9386) | 2025-11-28 10:45:22 +08:00
multi_gpu | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) | 2025-11-20 12:43:13 -05:00
multi_gpu_modeling | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) | 2025-11-20 12:43:13 -05:00
multimodal | [None][fix] InputProcessor config naming convention fix (#8705) | 2025-11-03 22:29:21 -08:00
ray_orchestrator | [TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (#9224) | 2025-11-26 10:59:06 +08:00
sampler | [TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (#9454) | 2025-11-28 13:00:39 +01:00
speculative | [None][ci] Waive blackwell test on spec gate. (#9502) | 2025-11-27 07:19:58 +08:00
thop | [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) | 2025-11-25 09:40:55 -05:00
helpers.py | [TRTLLM-8521][chore] remove circular dependency between model engine and cuda graph runner (#7572) | 2025-11-11 10:13:45 -08:00
pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00
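
For orientation, below is a minimal sketch of how one of the suites listed above might be run locally. The directory path comes from the listing; the pytest invocation and the assumption of executing from the TensorRT-LLM repository root are mine, not part of the listing.

```python
# Minimal sketch: run the sampler unit tests from the listing above with pytest.
# Assumes pytest and the TensorRT-LLM test dependencies are installed and that
# this script is executed from the repository root.
import sys

import pytest

if __name__ == "__main__":
    # -q keeps the output compact; the path below is the sampler suite listed above.
    sys.exit(pytest.main(["-q", "tests/unittest/_torch/sampler"]))
```

An equivalent command-line invocation would be `pytest -q tests/unittest/_torch/sampler`.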