TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: 2aacdba1e4 by Dom Brown (2025-07-04): [TRTLLM-6100] fix: Nvbug 5356427: autotuned TRTLLM Gen fp8 block scale MoE illegal memory access (#5676)
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 |
| `cpp_custom_ops.py` | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 |
| `flashinfer_custom_ops.py` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 |
| `torch_custom_ops.py` | [TRTLLM-5589] feat: Minor optimizations for tunable FP8 batched GEMM op. (#5139) | 2025-06-18 |
| `trtllm_gen_custom_ops.py` | [TRTLLM-6100] fix: Nvbug 5356427: autotuned TRTLLM Gen fp8 block scale MoE illegal memory access (#5676) | 2025-07-04 |
| `userbuffers_custom_ops.py` | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 |
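
The modules in this directory register TRT-LLM's PyTorch-workflow custom operators (C++/CUDA, FlashInfer, TRTLLM Gen, and UserBuffers backends) with the PyTorch dispatcher. As a minimal sketch of the general registration pattern such modules build on, not TRT-LLM's actual code: the op name `trtllm_example::scaled_add` and its implementation below are hypothetical, chosen only to illustrate the `torch.library.custom_op` / `register_fake` mechanism that lets `torch.compile` and a kernel autotuner trace an op without executing its kernel.

```python
# Minimal sketch of registering a PyTorch custom op (PyTorch 2.4+ API).
# The "trtllm_example" namespace and the op body are hypothetical.
import torch


@torch.library.custom_op("trtllm_example::scaled_add", mutates_args=())
def scaled_add(a: torch.Tensor, b: torch.Tensor, scale: float) -> torch.Tensor:
    # Eager reference implementation; a real custom op would dispatch
    # to a compiled CUDA kernel here.
    return a + scale * b


@scaled_add.register_fake
def _(a: torch.Tensor, b: torch.Tensor, scale: float) -> torch.Tensor:
    # Shape/dtype-only "fake" implementation, so tracing (torch.compile,
    # autotuning warm-up passes) can propagate metadata without running
    # the kernel.
    return torch.empty_like(a)


if __name__ == "__main__":
    x = torch.randn(4)
    y = torch.randn(4)
    # Registered ops are also reachable through the torch.ops namespace.
    print(torch.ops.trtllm_example.scaled_add(x, y, 0.5))
```

A real registration along these lines would select among tuned kernel variants at dispatch time; the fake implementation supplies only output shapes and dtypes, which is what allows an autotuner to plan over an op before any kernel is launched.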