TensorRT-LLM/tests/unittest/_torch
Latest commit 6df2c8a074 by benzh-2025 (2026-01-13 21:11:13 +08:00):
[None][feat] add fp4 gemm + allreduce (#9729)
Signed-off-by: benzh
Signed-off-by: benzh-2025
Name | Last commit | Last updated
attention | [TRTLLM-9798][feat] Change to use new DeepGEMM MQA sm100 kernel for MTP-3 (#10226) | 2025-12-24 14:39:12 +08:00
auto_deploy | [https://nvbugs/5548861][fix] AutoDeploy: Fix the test (#10521) | 2026-01-09 13:30:24 -08:00
compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00
distributed | [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) | 2026-01-01 13:42:53 -05:00
executor | [https://nvbugs/5717993][fix] Add execution_stream across PyExecutor, KVCacheManager, PeftCacheManager to ensure proper CUDA stream synchronization between KV cache transfer operations and model forward kernels. (#10060) | 2025-12-31 09:22:54 -08:00
misc | [None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998) | 2026-01-05 17:16:12 +01:00
modeling | [TRTLLM-10195][feat] K-EXAONE support (#10355) | 2026-01-12 00:29:51 +09:00
models/checkpoints/hf | [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) | 2025-12-05 16:07:20 +01:00
modules | [https://nvbugs/5784543][fix] Setup dist before using autotuner. (#10491) | 2026-01-08 10:32:50 +08:00
multi_gpu | [None][feat] add fp4 gemm + allreduce (#9729) | 2026-01-13 21:11:13 +08:00
multi_gpu_modeling | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) | 2025-11-20 12:43:13 -05:00
multimodal | [TRTLLM-9522][test] cover LLM API multi_modal_embeddings (#9963) | 2026-01-12 11:38:22 +01:00
ray_orchestrator | [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) | 2026-01-01 13:42:53 -05:00
sampler | [None][fix] avoid implicit cudaStreamSynchronize in sample_async. (#10120) | 2025-12-23 10:15:40 +08:00
speculative | [https://nvbugs/5749988][fix] Remove redundant qwen3 spec dec test (#10387) | 2026-01-06 11:46:34 -05:00
thop | [None][feat] CuteDSL MOE FC1 Enhancement (#10088) | 2026-01-06 09:30:43 +08:00
helpers.py | [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) | 2025-12-04 08:03:33 +02:00
pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00
test_model_config.py | [TRTLLM-10171][fix] Correct attention handling in ModelConfig and KVCacheManager (#10330) | 2026-01-04 06:07:30 -05:00