TensorRT-LLM / tensorrt_llm/_torch/compilation
Latest commit: dcf5c86720, [None][feat] Unify nvfp4 gemm backend (#8963), 2025-12-02 11:03:51 +08:00
Signed-off-by: Shijie Wang <jaywan@nvidia.com>
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| multi_stream | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) | 2025-07-21 19:10:22 +08:00 |
| patterns | [None][feat] Unify nvfp4 gemm backend (#8963) | 2025-12-02 11:03:51 +08:00 |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| backend.py | [https://nvbugs/5546510][fix] Move torch.cuda.Stream out of torch com… (#8494) | 2025-11-20 12:43:13 -05:00 |
| piecewise_optimizer.py | [https://nvbugs/5550409][fix] Disable torch compile in piecewise attention part to Avoid host overhead (#8708) | 2025-10-29 18:12:58 +08:00 |
| recover_pass.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| remove_copy_pass.py | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) | 2025-07-21 19:10:22 +08:00 |
| utils.py | [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) | 2025-09-04 23:30:14 +08:00 |
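
The files above make up TensorRT-LLM's torch.compile integration: backend.py provides the custom compile backend, while recover_pass.py and remove_copy_pass.py are FX graph transformations run as part of it. As background only, here is a minimal sketch of that general pattern using PyTorch's public custom-backend API; the names `toy_backend` and `remove_redundant_clones` are illustrative assumptions, not TensorRT-LLM's actual code.

```python
import torch
import torch.fx as fx


def remove_redundant_clones(gm: fx.GraphModule) -> fx.GraphModule:
    """Rewrite the FX graph so each clone() call is replaced by its input.

    Safe only when the cloned tensor is never mutated afterwards; a real
    pass would check aliasing/mutation before rewriting. (Hypothetical
    pass, loosely in the spirit of a remove-copy pass.)
    """
    for node in list(gm.graph.nodes):
        is_clone = (
            (node.op == "call_method" and node.target == "clone")
            or (node.op == "call_function"
                and node.target in (torch.clone, torch.ops.aten.clone.default))
        )
        if is_clone:
            # Redirect all users of the clone to its input, then drop the node.
            node.replace_all_uses_with(node.args[0])
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm


def toy_backend(gm: fx.GraphModule, example_inputs):
    """Custom torch.compile backend: run our pass, then execute the graph eagerly."""
    return remove_redundant_clones(gm)


@torch.compile(backend=toy_backend)
def f(x):
    return x.clone() + 1


print(f(torch.ones(4)))  # tensor([2., 2., 2., 2.])
```

A production backend would typically hand the transformed graph on to a downstream compiler rather than returning it for eager execution, but the shape is the same: Dynamo captures an `fx.GraphModule`, the backend rewrites it with a sequence of passes, and the returned callable is what runs at call time.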