Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-29 15:15:08 +08:00
* optimize kv cache reuse workflow for MLA

  - write kv cache first and only call up-projection GEMM once
  - relax contiguous requirements of k/v for setting paged kv cache
  - return two contiguous tensors when loading MLA KV Cache

  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* support fp8 kv cache for MLA kv cache reuse

  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* resolve comments

  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

---------

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
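The headline change reorders the MLA reuse path: the compressed latent is written into the paged KV cache first, and the up-projection GEMM then runs once over the whole token range instead of once per block, returning K and V as two contiguous tensors. Below is a minimal PyTorch sketch of that flow; every name (`write_then_up_project`, `paged_cache`, `w_uk`, `w_uv`, `quantize_latent_fp8`) and all shapes are illustrative assumptions, not the TensorRT-LLM API.

```python
# Illustrative sketch only -- not the TensorRT-LLM implementation.
# Assumed layout: paged_cache is a list of [tokens_per_block, rank] blocks;
# w_uk / w_uv are [rank, num_heads * head_dim] up-projection weights.
import torch


def write_then_up_project(latent_kv: torch.Tensor,
                          paged_cache: list[torch.Tensor],
                          block_ids: list[int],
                          tokens_per_block: int,
                          w_uk: torch.Tensor,
                          w_uv: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Write the compressed MLA latent into the paged cache first, then run
    the up-projection GEMM once over all tokens rather than per block."""
    num_tokens = latent_kv.shape[0]

    # 1) Scatter the latent into cache blocks. The source slices need not
    #    form one contiguous buffer (the relaxed contiguity requirement).
    for i, block_id in enumerate(block_ids):
        start = i * tokens_per_block
        end = min(start + tokens_per_block, num_tokens)
        paged_cache[block_id][:end - start].copy_(latent_kv[start:end])

    # 2) A single up-projection GEMM per weight, yielding K and V as two
    #    separate contiguous tensors for the attention kernel.
    k = (latent_kv @ w_uk).contiguous()
    v = (latent_kv @ w_uv).contiguous()
    return k, v


def quantize_latent_fp8(latent_kv: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """FP8 storage for the cached latent (requires PyTorch float8 support);
    dequantize with `q.to(torch.float32) * scale` before the up-projection."""
    return (latent_kv / scale).to(torch.float8_e4m3fn)
```

Caching the compressed latent rather than the expanded K/V is what makes the single GEMM possible, and the FP8 variant roughly halves the cache footprint at the cost of a dequantize step before the projection.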
| Name |
|---|
| .. |
| __init__.py |
| _util.py |
| config_utils.py |
| config.py |
| cuda_graph_runner.py |
| guided_decoder.py |
| kv_cache_transceiver.py |
| layerwise_nvtx_marker.py |
| llm_request.py |
| model_engine.py |
| py_executor_creator.py |
| py_executor.py |
| resource_manager.py |
| sampler.py |
| scheduler.py |
| seq_slot_manager.py |