TensorRT-LLM/tensorrt_llm/_torch/compilation
Latest commit: a69bd2a6fa by Yi Zhang, 2025-10-29 18:12:58 +08:00
[https://nvbugs/5550409][fix] Disable torch compile in piecewise attention part to avoid host overhead (#8708)
Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
multi_stream/ [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) 2025-07-21 19:10:22 +08:00
patterns/ feat: Add non UB AR + Residual + Norm + Quant fusion (#6320) 2025-07-24 05:51:43 -04:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
backend.py [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750) 2025-08-26 18:31:33 -04:00
piecewise_optimizer.py [https://nvbugs/5550409][fix] Disable torch compile in piecewise attention part to avoid host overhead (#8708) 2025-10-29 18:12:58 +08:00
recover_pass.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
remove_copy_pass.py [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) 2025-07-21 19:10:22 +08:00
utils.py [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) 2025-09-04 23:30:14 +08:00