TensorRT-LLM/tensorrt_llm/_torch/custom_ops
__init__.py [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) 2025-06-17 21:01:56 +08:00
cpp_custom_ops.py [feat] Support torch compile for attention dp (#5086) 2025-07-01 13:48:52 -04:00
flashinfer_custom_ops.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
torch_custom_ops.py [https://nvbugspro.nvidia.com/bug/5329655] [feat] Pytorch path add spec dec param to attention op (#5146) 2025-07-02 04:54:43 -04:00
trtllm_gen_custom_ops.py [5356427] fix: Remove the seq_len of 4096 from FP8 block scale MoE tuning configs. (#5485) 2025-06-26 08:38:35 +08:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00
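The modules listed above register custom operators for TRT-LLM's PyTorch flow; the "Support torch compile for attention dp" entry on cpp_custom_ops.py suggests that fake (meta) kernels are registered alongside the real ones so the ops can be traced under torch.compile. Below is a minimal, hypothetical sketch of that registration pattern using the public torch.library API; the op name trtllm::example_gemm and its body are illustrative assumptions, not ops taken from this directory.

import torch

# Hypothetical op in a trtllm:: namespace; the real modules above register
# attention, MoE, and userbuffers kernels rather than this toy GEMM.
@torch.library.custom_op("trtllm::example_gemm", mutates_args=())
def example_gemm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Eager implementation; a production op would dispatch to a CUDA kernel.
    return a @ b

# Fake (meta) kernel: a shape/dtype-only implementation that lets
# torch.compile trace graphs containing the op without executing it.
@example_gemm.register_fake
def _(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a.new_empty(a.shape[0], b.shape[1])

Once registered, the op is callable as torch.ops.trtllm.example_gemm(a, b) from both eager code and compiled graphs.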