TensorRT-LLM/cpp/tensorrt_llm
zhhuang-nv 8452775db8
[TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535)
* Optimize the KV cache reuse workflow for MLA

Write the KV cache first and call the up-projection GEMM only once.
Relax the contiguity requirement on k/v when setting the paged KV cache.
Return two contiguous tensors when loading the MLA KV cache.

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* Support FP8 KV cache for MLA KV cache reuse (a minimal sketch of this flow follows the commit metadata below)

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* Resolve review comments

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

---------

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
2025-05-23 19:47:50 +08:00
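
The commit message above outlines the reworked reuse path: only the compressed MLA latent (plus the rotary key slice) is kept in the paged KV cache, it can now be stored in FP8, and the up-projection GEMM that expands it back into per-head K/V runs only once when a cached block is reused, with contiguous K and V tensors returned. The PyTorch snippet below is a minimal sketch of that idea under assumed shapes and names (KV_LORA_RANK, QK_ROPE_DIM, w_up, etc. are hypothetical, not values from TensorRT-LLM); the actual implementation is in the CUDA kernels and torch ops this commit touches (kernels, thop), not in Python.

```python
import torch

# All sizes and names below are hypothetical stand-ins, not values from TensorRT-LLM.
KV_LORA_RANK = 512   # width of the compressed (latent) KV vector in MLA
QK_ROPE_DIM = 64     # rotary key slice stored alongside the latent
NUM_HEADS = 16
QK_NOPE_DIM = 128
V_HEAD_DIM = 128

def quantize_fp8(x: torch.Tensor):
    """Per-tensor FP8 (e4m3) quantization; 448 is the largest finite e4m3 value."""
    scale = x.abs().max().clamp(min=1e-6) / 448.0
    return (x / scale).to(torch.float8_e4m3fn), scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale

# Writing the cache: only the compressed latent plus the rope key slice is stored,
# quantized to FP8, so the reusable blocks stay small.
seq_len = 32
latent = torch.randn(seq_len, KV_LORA_RANK + QK_ROPE_DIM)  # [c_kv | k_rope]
cached_fp8, cache_scale = quantize_fp8(latent)

# Reusing the cache: dequantize the block, then run the up-projection GEMM once
# to expand the latent into the per-head no-rope K part and V.
w_up = torch.randn(KV_LORA_RANK, NUM_HEADS * (QK_NOPE_DIM + V_HEAD_DIM))  # hypothetical kv_b_proj weight
latent_deq = dequantize_fp8(cached_fp8, cache_scale)
c_kv, k_rope = latent_deq.split([KV_LORA_RANK, QK_ROPE_DIM], dim=-1)

kv = (c_kv @ w_up).view(seq_len, NUM_HEADS, QK_NOPE_DIM + V_HEAD_DIM)
k_nope, v = (t.contiguous() for t in kv.split([QK_NOPE_DIM, V_HEAD_DIM], dim=-1))
k = torch.cat([k_nope, k_rope.unsqueeze(1).expand(-1, NUM_HEADS, -1)], dim=-1)

print(k.shape, v.shape)  # (32, 16, 192) and (32, 16, 128)
```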
batch_manager [feat][TRTLLM-5018] Dis serving python runtime trt backend (#4243) 2025-05-22 22:01:06 -04:00
common [feat][TRTLLM-5018] Dis serving python runtime trt backend (#4243) 2025-05-22 22:01:06 -04:00
cutlass_extensions/include/cutlass_extensions chore: guardword clean for header file. (#4540) 2025-05-23 10:08:14 +08:00
executor Agent interface impl for NIXL (#4125) 2025-05-22 09:09:41 +08:00
executor_worker Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
kernels [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) 2025-05-23 19:47:50 +08:00
layers Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
plugins Fix bias shape in weightOnlyGroupwiseQuantMatmulPlugin for TRT workflow (#4348) 2025-05-16 10:02:30 +08:00
pybind [feat][TRTLLM-5018] Dis serving python runtime trt backend (#4243) 2025-05-22 22:01:06 -04:00
runtime fix[nvbug-5295425]: [TRTLLM-5385] fix race condition in MoeLoadBalancer (#4573) 2025-05-23 09:24:23 +08:00
testing refactor: Move ModelSpec to core library (#3980) 2025-05-04 01:39:09 +08:00
thop [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) 2025-05-23 19:47:50 +08:00
CMakeLists.txt feat: NIXL interface integration (#3934) 2025-05-19 18:18:22 +08:00