TensorRT-LLM/tensorrt_llm/_torch/compilation
Latest commit: 3454eacd74 by Jin Li, 2025-11-20 12:43:13 -05:00
[https://nvbugs/5546510][fix] Move torch.cuda.Stream out of torch com… (#8494)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
multi_stream/
patterns/ feat: Add non UB AR + Residual + Norm + Quant fusion (#6320) 2025-07-24 05:51:43 -04:00
__init__.py
backend.py [https://nvbugs/5546510][fix] Move torch.cuda.Stream out of torch com… (#8494) 2025-11-20 12:43:13 -05:00
piecewise_optimizer.py [https://nvbugs/5550409][fix] Disable torch compile in piecewise attention part to Avoid host overhead (#8708) 2025-10-29 18:12:58 +08:00
recover_pass.py
remove_copy_pass.py
utils.py [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) 2025-09-04 23:30:14 +08:00