TensorRT-LLM/tensorrt_llm/_torch/compilation
Latest commit f1b85fea4c by Rundong Li (2026-02-02 19:44:27 +08:00):
[None][feat] Integrate cuda.tile RMS norm kernels (#9725)
Signed-off-by: Rundong (David) Li <davidli@nvidia.com>
Co-authored-by: Jinman Xie <jinmanx@nvidia.com>
Co-authored-by: Alexey Bylinkin <abylinkin@nvidia.com>
Co-authored-by: Qiqi Xiao <qiqix@nvidia.com>
Co-authored-by: Biao Wang <biaow@nvidia.com>
Co-authored-by: Thomas Schmid <thschmid@nvidia.com>
multi_stream/           [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838)   2025-12-04 13:32:11 +08:00
patterns/               [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531)    2026-01-05 15:44:37 +08:00
__init__.py             Update TensorRT-LLM (#2755)                                                         2025-02-11 03:01:00 +00:00
backend.py              [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838)   2025-12-04 13:32:11 +08:00
piecewise_optimizer.py  [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838)   2025-12-04 13:32:11 +08:00
recover_pass.py         [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804)                        2025-05-09 11:04:01 +08:00
remove_copy_pass.py     [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847)      2025-07-21 19:10:22 +08:00
utils.py                [None][feat] Integrate cuda.tile RMS norm kernels (#9725)                           2026-02-02 19:44:27 +08:00