TensorRT-LLM/tensorrt_llm/_torch/compilation
Latest commit: 1745102e72 by Enwei Zhu, 2025-09-04 23:30:14 +08:00
[TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Name                   | Last commit                                                                                             | Date
multi_stream           | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847)                         | 2025-07-21 19:10:22 +08:00
patterns               | feat: Add non UB AR + Residual + Norm + Quant fusion (#6320)                                            | 2025-07-24 05:51:43 -04:00
__init__.py            | Update TensorRT-LLM (#2755)                                                                             | 2025-02-11 03:01:00 +00:00
backend.py             | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750)                                             | 2025-08-26 18:31:33 -04:00
piecewise_optimizer.py | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750)                                             | 2025-08-26 18:31:33 -04:00
recover_pass.py        | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804)                                            | 2025-05-09 11:04:01 +08:00
remove_copy_pass.py    | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847)                          | 2025-07-21 19:10:22 +08:00
utils.py               | [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481)  | 2025-09-04 23:30:14 +08:00