TensorRT-LLM/cpp/tensorrt_llm
Latest commit: dd959de0fd by nv-guomingz — chore: update internal_cutlass_kernels. (#3973) — 2025-04-30 22:13:17 +08:00
Signed-off-by: nv-guomingz <37257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: nv-guomingz <37257613+nv-guomingz@users.noreply.github.com>
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batch_manager | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| common | fix: [https://nvbugspro.nvidia.com/bug/5243482] If FlashMLA is used, the existence of FMHA based MLA kernels should not be checked. (#3862) | 2025-04-30 14:27:38 +08:00 |
| cutlass_extensions/include/cutlass_extensions | TRTLLM-4624 feat: Add nvfp4 gemm and moe support for SM120 (#3770) | 2025-04-29 11:19:11 -04:00 |
| executor | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | chore: update internal_cutlass_kernels. (#3973) | 2025-04-30 22:13:17 +08:00 |
| layers | fix: Eagle decoding (#3456) | 2025-04-11 22:06:38 +08:00 |
| plugins | TRTLLM-4624 feat: Add nvfp4 gemm and moe support for SM120 (#3770) | 2025-04-29 11:19:11 -04:00 |
| pybind | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| runtime | fix: 5197419 and removed unused runtime kernels (#3631) | 2025-04-23 18:04:50 +02:00 |
| thop | fix: [https://nvbugspro.nvidia.com/bug/5243482] If FlashMLA is used, the existence of FMHA based MLA kernels should not be checked. (#3862) | 2025-04-30 14:27:38 +08:00 |
| CMakeLists.txt | infra: open source XQA kernels (#3762) | 2025-04-30 18:05:15 +08:00 |