TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 3e3b1769ad by Dom Brown, 2025-07-09 08:21:58 +01:00
[TRTLLM-5881] feat: Integrate TRT-LLM Gen FP4 block scale MoE with Pytorch workflow kernel autotuner (#5764)
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Name                                          | Latest commit                                                                                                | Date
----------------------------------------------|--------------------------------------------------------------------------------------------------------------|---------------------------
batch_manager                                 | fix: Disaggregate serving with attention DP (#4993)                                                          | 2025-07-08 16:15:03 +08:00
common                                        | [TRTLLM-5366][feat] Add support for sm121 (#5524)                                                            | 2025-07-08 14:27:00 -07:00
cutlass_extensions/include/cutlass_extensions | Fix GEMM+AR fusion on blackwell (#5563)                                                                      | 2025-07-09 08:48:47 +08:00
deep_ep                                       | Fix a quote error introduced in #5534 (#5816)                                                                | 2025-07-08 18:48:32 +08:00
executor                                      | feat: KV events for sliding window attention (#5580)                                                         | 2025-07-05 06:05:20 +08:00
executor_worker                               | Update TensorRT-LLM (#2792)                                                                                  | 2025-02-18 21:27:39 +08:00
kernels                                       | Add is_fp8_output key to XQA kernel cubin hashing (solves Eagle3-one-engine Hopper fp8 bug) (#5813)          | 2025-07-09 09:26:27 +08:00
layers                                        | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979)                                                        | 2025-05-12 22:32:29 +02:00
plugins                                       | Fix GEMM+AR fusion on blackwell (#5563)                                                                      | 2025-07-09 08:48:47 +08:00
pybind                                        | Fix GEMM+AR fusion on blackwell (#5563)                                                                      | 2025-07-09 08:48:47 +08:00
runtime                                       | Fix GEMM+AR fusion on blackwell (#5563)                                                                      | 2025-07-09 08:48:47 +08:00
testing                                       | fix: Improve chunking test and skip empty kernel calls (#5710)                                               | 2025-07-04 09:08:15 +02:00
thop                                          | [TRTLLM-5881] feat: Integrate TRT-LLM Gen FP4 block scale MoE with Pytorch workflow kernel autotuner (#5764) | 2025-07-09 08:21:58 +01:00
CMakeLists.txt                                | Fix GEMM+AR fusion on blackwell (#5563)                                                                      | 2025-07-09 08:48:47 +08:00