Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
docs: Add KV Cache Management documentation

* Introduced a new document detailing the hierarchy and event system for KV cache management, including definitions for Pool, Block, and Page.
* Updated index.rst to include a reference to the new kv-cache-management.md file.
* Update docs/source/advanced/kv-cache-management.md
* Update KV Cache Pool Management
* docs: Add cross-file links
* docs: Clarify tokens_per_block
* docs: Clarify acronyms

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Co-authored-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
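The commit above documents a Pool/Block/Page hierarchy and a `tokens_per_block` setting for the KV cache. As a rough mental model only (the class and field names below are illustrative assumptions, not the actual TensorRT-LLM API; see docs/source/advanced/kv-cache-management.md for the real design), a pool is carved into fixed-size blocks, and a sequence of N tokens occupies ceil(N / tokens_per_block) blocks:

```python
# Hypothetical sketch of the Pool/Block hierarchy; names are assumptions,
# not TensorRT-LLM's actual classes.
from dataclasses import dataclass, field


@dataclass
class Block:
    """A fixed-size unit of KV cache holding up to tokens_per_block tokens."""
    block_id: int
    token_ids: list[int] = field(default_factory=list)


@dataclass
class Pool:
    """A memory region subdivided into equally sized blocks."""
    tokens_per_block: int
    num_blocks: int

    def blocks_needed(self, num_tokens: int) -> int:
        # Ceiling division: N tokens occupy ceil(N / tokens_per_block) blocks.
        return -(-num_tokens // self.tokens_per_block)


pool = Pool(tokens_per_block=64, num_blocks=1024)
print(pool.blocks_needed(100))  # -> 2: a 100-token sequence spans two blocks
```

The block granularity is the point of `tokens_per_block`: a larger value means fewer blocks per sequence but coarser reuse, a smaller value the reverse.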
Files in docs/source/advanced:

- images/
- disaggregated-service.md
- executor.md
- expert-parallelism.md
- gpt-attention.md
- gpt-runtime.md
- graph-rewriting.md
- kv-cache-management.md
- kv-cache-reuse.md
- lora.md
- lowprecision-pcie-allreduce.md
- speculative-decoding.md
- weight-streaming.md