TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
zhhuang-nv 97bc680cd8
feat: support kv cache reuse for MLA (#3571)
* support kv cache reuse for MLA

load compressed_kv and k_pe from the reused cache and do the up-projection (sketched after this entry)
use the 192/128 head-size MLA context kernel
supports Blackwell and Hopper for now

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
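
A minimal sketch of what the reuse path does on a cache hit, assuming DeepSeek-style MLA dimensions; all names, shapes, and weights here are illustrative, not TensorRT-LLM's actual API:

```python
import torch

# Assumed DeepSeek-style MLA dimensions (illustrative, not the real config):
kv_lora_rank     = 512   # width of the compressed latent stored in the KV cache
qk_nope_head_dim = 128   # non-RoPE part of each K head
qk_rope_head_dim = 64    # decoupled RoPE key k_pe, shared by all heads
v_head_dim       = 128
num_heads        = 16

# Stand-ins for the model's up-projection weights (W_UK / W_UV).
w_uk = torch.randn(kv_lora_rank, num_heads * qk_nope_head_dim)
w_uv = torch.randn(kv_lora_rank, num_heads * v_head_dim)

def up_project_reused_cache(compressed_kv, k_pe):
    """compressed_kv: [tokens, kv_lora_rank]; k_pe: [tokens, qk_rope_head_dim]."""
    tokens = compressed_kv.shape[0]
    # Up-project the cached latent into per-head K (non-RoPE part) and V.
    k_nope = (compressed_kv @ w_uk).view(tokens, num_heads, qk_nope_head_dim)
    v = (compressed_kv @ w_uv).view(tokens, num_heads, v_head_dim)
    # Broadcast the single k_pe head across all heads and concatenate:
    # K heads become 128 + 64 = 192 wide while V heads stay 128 wide,
    # which is why a 192/128 head-size context kernel is needed.
    k_pe = k_pe.unsqueeze(1).expand(tokens, num_heads, qk_rope_head_dim)
    k = torch.cat([k_nope, k_pe], dim=-1)   # [tokens, num_heads, 192]
    return k, v                             # v: [tokens, num_heads, 128]

k, v = up_project_reused_cache(torch.randn(8, kv_lora_rank),
                               torch.randn(8, qk_rope_head_dim))
```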

* add CI test

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* fix: set k_pe head_num to 1 for kernel 2 and kernel 2V2 (shape sketch after this entry)

Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
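
For context on this fix, a shape-level sketch (shapes and names are assumptions, not the kernel's real interface) of why the decoupled RoPE key carries head_num == 1: a single k_pe head is shared MQA-style across every query head and broadcast inside attention:

```python
import torch

batch, seq, num_heads, rope_dim = 2, 5, 16, 64
q_pe = torch.randn(batch, num_heads, seq, rope_dim)
k_pe = torch.randn(batch, 1, seq, rope_dim)    # head_num == 1, shared by all heads
scores_pe = q_pe @ k_pe.transpose(-1, -2)      # broadcasts over heads -> [2, 16, 5, 5]
```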

* resolve comments

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* use GPT-J-style RoPE for MLA (sketched after this entry)

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
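
A minimal sketch of the GPT-J-style (interleaved) rotation, which pairs adjacent lanes (x0, x1), (x2, x3), ... rather than the GPT-NeoX-style half-split pairs (x_i, x_{i+dim/2}); the function below is illustrative, not the kernel implementation:

```python
import torch

def rope_gptj(x, positions, base=10000.0):
    """x: [tokens, dim] with even dim; rotates interleaved even/odd lane pairs."""
    dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # [tokens, dim/2]
    cos, sin = angles.cos(), angles.sin()
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out

rotated = rope_gptj(torch.randn(4, 64), torch.arange(4))
```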

* fix rebase error and some docs

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* fix kv_lens

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* tiny fix

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* fix torch compile

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* fix: use normal device memory instead of pinned memory for unit test (illustrated after this entry)

Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
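
An illustrative contrast (not the actual unit-test code): pinned memory is page-locked host RAM that speeds up asynchronous host-to-device copies, whereas the fix allocates plain device memory:

```python
import torch

# Requires a CUDA device at runtime.
pinned_host = torch.empty(1024, pin_memory=True)        # page-locked host tensor
async_copy = pinned_host.to("cuda", non_blocking=True)  # what pinning enables
device_buf = torch.empty(1024, device="cuda")           # normal device memory
```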

* fix L0 tests

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* fix torch compile after rebase

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* resolve comments

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* resolve comments again

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

---------

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
Signed-off-by: zhhuang-nv <145532724+zhhuang-nv@users.noreply.github.com>
Co-authored-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
2025-05-15 15:22:21 +08:00
| File | Last commit | Date |
|---|---|---|
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| _util.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| config_utils.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| config.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| cuda_graph_runner.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| decoder.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| guided_decoder.py | feat: Support the Structural Tag in guided decoding (#4066) | 2025-05-12 17:24:50 +08:00 |
| kv_cache_transceiver.py | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_request.py | feat: adopt new logprob definition in PyTorch flow (#4057) | 2025-05-08 20:16:40 +08:00 |
| model_engine.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| py_executor_creator.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| py_executor.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| resource_manager.py | [fix] Fix add_dummy_requests for spec decoding cases (#4084) | 2025-05-09 16:52:51 +08:00 |
| scheduler.py | refactor: collect executor and decoder states into dataclass (#3234) | 2025-04-15 16:31:45 +08:00 |
| seq_slot_manager.py | fix: skip add new slot if request has slot 0 (#3991) | 2025-05-06 07:46:39 +02:00 |