TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 5a01ba5260 "use cu for fmha_v2 (#4694)" by qsang-nv, 2025-06-15 18:40:44 +08:00
Signed-off-by: Qidi Sang <200703406+qsang-nv@users.noreply.github.com>
Name                                           Last commit message                                                              Date
batch_manager                                  refactor: Speculative decoding buffers (#5091)                                   2025-06-14 11:39:32 +02:00
common                                         [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133)   2025-06-13 15:53:29 +08:00
cutlass_extensions/include/cutlass_extensions  refactoring: port customized kernels with public cutlass version (#5027)        2025-06-13 16:19:31 +08:00
executor                                       ucxx only use ucp_feature_tag to avoid some issues on some platforms (#4994)    2025-06-13 19:14:25 +08:00
executor_worker                                Update TensorRT-LLM (#2792)                                                      2025-02-18 21:27:39 +08:00
kernels                                        use cu for fmha_v2 (#4694)                                                       2025-06-15 18:40:44 +08:00
layers                                         Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979)                            2025-05-12 22:32:29 +02:00
plugins                                        feat: Add support for fp8 rowwise quantization (#4876)                           2025-06-14 06:37:48 -07:00
pybind                                         refactor: Speculative decoding buffers (#5091)                                   2025-06-14 11:39:32 +02:00
runtime                                        refactor: Speculative decoding buffers (#5091)                                   2025-06-14 11:39:32 +02:00
testing                                        refactor: Move ModelSpec to core library (#3980)                                 2025-05-04 01:39:09 +08:00
thop                                           Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)   2025-06-14 17:36:22 +08:00
CMakeLists.txt                                 refactoring: port customized kernels with public cutlass version (#5027)        2025-06-13 16:19:31 +08:00