TensorRT-LLM/tests/unittest/_torch
Latest commit: 73870ae4ad by Necofish
[None][feat] support Qwen3-VL dense model in pytorch backend (#9060)
Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn>
2025-12-31 17:54:26 +09:00
| Name | Last commit | Date |
| --- | --- | --- |
| attention | [TRTLLM-9798][feat] Change to use new DeepGEMM MQA sm100 kernel for MTP-3 (#10226) | 2025-12-24 14:39:12 +08:00 |
| auto_deploy | [#9626][feat] Add an auto-deploy transform for using cutlass FP4 MoE kernels (#10304) | 2025-12-29 23:18:15 +02:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| executor | [TRTLLM-5972][chore] Load balance decode token KV cache with helix parallelism (#9757) | 2025-12-12 22:29:05 +08:00 |
| misc | [TRTLLM-9615][feat] Implement a distributed tuning system (#9621) | 2025-12-15 21:08:53 +08:00 |
| modeling | [None][feat] support Qwen3-VL dense model in pytorch backend (#9060) | 2025-12-31 17:54:26 +09:00 |
| models/checkpoints/hf | [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) | 2025-12-05 16:07:20 +01:00 |
| modules | [TRTLLM-9831][perf] Enable 2CTA with autotune for CuteDSL MoE and Grouped GEMM optimizations (#10201) | 2025-12-25 09:04:20 -05:00 |
| multi_gpu | [TRTLLM-10126][feat] Increase topk upper limit to 22 for NVLinkOneSid… (#10229) | 2025-12-27 22:48:10 +08:00 |
| multi_gpu_modeling | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) | 2025-11-20 12:43:13 -05:00 |
| multimodal | [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) | 2025-12-22 06:32:49 -05:00 |
| ray_orchestrator | [TRTLLM-9737][chore] Add rl perf reproduce script and enhance the robustness of Ray tests (#9939) | 2025-12-24 15:27:01 +08:00 |
| sampler | [None][fix] avoid implicit cudaStreamSynchronize in sample_async. (#10120) | 2025-12-23 10:15:40 +08:00 |
| speculative | [https://nvbugs/5652062][fix] Rewind kv_cache and reset draft tokens (#10160) | 2025-12-25 09:13:51 -05:00 |
| thop | [TRTLLM-9831][perf] Enable 2CTA with autotune for CuteDSL MoE and Grouped GEMM optimizations (#10201) | 2025-12-25 09:04:20 -05:00 |
| helpers.py | [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) | 2025-12-04 08:03:33 +02:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |