TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: d0f107e4dd [TRTLLM-5966][feat] Helix: add full MLA support for Helix (#8104), Matthias Jouanneaux <mjoux@nvidia.com>, 2025-11-04 09:06:58 +08:00
__init__.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py [TRTLLM-5966][feat] Helix: add full MLA support for Helix (#8104) 2025-11-04 09:06:58 +08:00
cute_dsl_custom_ops.py [None][fix] Fix cute dsl nvfp4 gemm autotune issue (#8761) 2025-11-03 22:55:45 +08:00
flashinfer_custom_ops.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
torch_custom_ops.py [TRTLLM-7318][feat] MnnvlThroughput AlltoAll implementation. (#7499) 2025-10-27 13:23:06 -04:00
trtllm_gen_custom_ops.py [None][feat] Autotuner can iterate through all tactics for test purposes (#8663) 2025-10-30 13:11:25 +01:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00
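The modules above collect the custom operator definitions used by TensorRT-LLM's PyTorch flow (C++-backed ops, cute-DSL and FlashInfer kernels, TRTLLM-gen ops, and userbuffers ops). As a rough, hypothetical sketch only (not code taken from these files), a PyTorch custom op is typically declared with torch.library.custom_op plus a fake implementation for tracing; the namespace my_ns and the op scaled_add below are made-up names for illustration.

import torch

# Hypothetical example: register a custom op "my_ns::scaled_add".
# The namespace and op name are illustrative, not from this directory.
@torch.library.custom_op("my_ns::scaled_add", mutates_args=())
def scaled_add(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    # Eager reference implementation of the op.
    return x + scale * y

@scaled_add.register_fake
def _(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    # Shape/dtype-only "fake" implementation so torch.compile / export can trace the op.
    return torch.empty_like(x)

if __name__ == "__main__":
    a = torch.randn(4)
    b = torch.randn(4)
    # Once registered, the op is also reachable as torch.ops.my_ns.scaled_add(a, b, 2.0).
    print(scaled_add(a, b, 2.0))

After registration the op is dispatchable like any built-in operator and visible to torch.compile; backend-specific implementations (CUDA kernels, FlashInfer calls, etc.) can then be attached per device type.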