TensorRT-LLM/cpp/tensorrt_llm
Latest commit 20d0649f19 by Jinyang Yuan (2025-06-06 22:32:49 +08:00):
[feat] Support XQA-based MLA on SM120 (#4858)
Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: Yao Yao <lowsfer@users.noreply.github.com>
Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com>
Name | Latest commit | Date
batch_manager | feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749) | 2025-06-04 09:56:31 +08:00
common | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00
cutlass_extensions/include/cutlass_extensions | chore: guardword clean for header file. (#4540) | 2025-05-23 10:08:14 +08:00
executor | feature: KV Cache GPUDirect Storage (#3209) | 2025-05-28 23:27:43 +00:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00
layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
plugins | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00
pybind | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00
runtime | [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594) | 2025-06-04 18:26:13 -07:00
testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00
thop | [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594) | 2025-06-04 18:26:13 -07:00
CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00