This merge request adds more SWA KV cache functionality to the KV cache manager. Before this change, the KV cache for sliding window attention (SWA) held only "window size" blocks and reused them cyclically. That design cannot make use of additional GPU memory, which limits the maximum batch size and throughput, and it cannot support KV cache reuse. This MR changes the behavior so that the manager writes blocks linearly. With linear block writing, blocks that fall out of the attention window (out-of-window, OOW) are detached as the window advances. For now, to get a correct feature first, OOW blocks are offloaded directly from the primary block pool (GPU memory) to the secondary block pool (host memory); a future change will delegate this block movement to the eviction policy. KV cache reuse for SWA is not implemented in this merge request and will be added in a follow-up merge request.

With blocks written linearly, the maximum number of blocks allocated for a sequence (`GenerationRequest`) is determined by the specified "max sequence length". The `GenerationRequest`, which stores the cache block bookkeeping structure, now keeps blocks for up to "max sequence length" tokens.

Given the above, the main changes are (more context in the MR):
- Remove the "cyclic" concept from the KV cache manager; this concept originally guarded block reuse in the KV cache manager.
- Add a detach mechanism under `KVCacheManager::addToken` (a sketch of the idea follows below). Note that detach is still disabled for SWA when reuse is enabled; a follow-up merge request will address this.
- Make "max sequence length" a non-optional parameter of the `KVCacheManager`/`BlockManager`.
- Give every window-size resource pool an identical proportion of memory.
- Fix the free-memory calculation in `resource_manager.py`.

Signed-off-by: eopXD <yuehtingc@nvidia.com>
Co-authored-by: Tomer Asida <tasida@nvidia.com>
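To make the linear-write-plus-detach behavior concrete, here is a minimal Python sketch of the bookkeeping idea. It is an illustration only: the class, method, and callback names are assumptions, not the actual `KVCacheManager`/`GenerationRequest` API.

```python
# Minimal sketch (not the real implementation) of linear block writing with
# out-of-window (OOW) detach for sliding window attention (SWA).
# All names here are illustrative assumptions.

class SwaBlockBookkeeping:
    def __init__(self, tokens_per_block, window_size, max_sequence_length):
        self.tokens_per_block = tokens_per_block
        self.window_size = window_size              # attention window, in tokens
        self.max_sequence_length = max_sequence_length
        self.blocks = []                            # live block ids, oldest first
        self.num_tokens = 0                         # tokens written so far
        self.num_detached = 0                       # blocks already detached

    def add_token(self, allocate_block, offload_to_secondary):
        """Append one token: allocate a fresh block on block boundaries
        (linear writing, never cyclic reuse) and detach blocks whose tokens
        have all fallen out of the attention window."""
        assert self.num_tokens < self.max_sequence_length
        if self.num_tokens % self.tokens_per_block == 0:
            self.blocks.append(allocate_block())
        self.num_tokens += 1

        # Index of the first block that still holds an in-window token.
        oldest_needed_token = max(0, self.num_tokens - self.window_size)
        first_needed_block = oldest_needed_token // self.tokens_per_block

        # Detach everything older: for now, offload from the primary (GPU)
        # pool to the secondary (host) pool; eventually this movement would
        # be delegated to the eviction policy.
        while self.num_detached < first_needed_block:
            offload_to_secondary(self.blocks.pop(0))
            self.num_detached += 1
```

The key contrast with the previous cyclic scheme is that blocks are never overwritten in place: a sequence can occupy up to ceil(max_sequence_length / tokens_per_block) blocks, and OOW blocks are explicitly detached rather than recycled.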
TensorRT LLM test definitions
This folder contains test definitions for TensorRT LLM.
Directory structure
.
└── integration # Root directory for integration tests
├── defs # Test definitions
├── perf_configs # Configs for perf tests
└── test_lists # Test lists
├── test-db # Test-DB, the test list convention adopted by CI
├── dev # Other test lists used by TRT LLM developers
├── qa # Test lists used by QA
└── waives.txt # Test waive list
- To run perf tests, you also need to first build the cpp benchmark by calling `build_wheel.py` with the `--benchmarks` flag.
Run perf tests
All perf test names are of the form `perf/test_perf.py::test_perf[...]`, where the `...` part encodes the test parameters.
Below are some pytest options specific to perf tests:
# execute these in the tensorrt-llm source repo root dir.
# install dependencies, do not need to do it every time if already installed.
pip install -r requirements-dev.txt
# example 1: run a test case
# For example, if QA reports a perf bug for `perf/test_perf.py::test_perf[llama_7b-cppmanager-exe-plugin_ifb-float16-input_output_len:128,128,+512,32]`, then you can repro it by running:
cd LLM_ROOT/tests/integration/defs
echo "perf/test_perf.py::test_perf[llama_7b-cppmanager-exe-plugin_ifb-float16-input_output_len:128,128,+512,32]" > perf.txt
pytest --perf --test-list=perf.txt --output-dir=/workspace/test-log --perf-log-formats csv --perf-log-formats yaml
The captured perf metrics will be saved in /workspace/test-log/perf_scripts_test_results.csv or /workspace/test-log/perf_scripts_test_results.yaml, depending on the `--perf-log-formats` option, and the test logs are saved in /workspace/test-log/result.xml. Currently, we capture these perf metrics:
- `test_perf_metric_build_time`: the engine building time in seconds.
- `test_perf_metric_build_peak_cpu_memory`: the build-phase peak CPU memory usage in MB.
- `test_perf_metric_build_peak_gpu_memory`: the build-phase peak GPU memory usage in MB.
- `test_perf_metric_inference_time`: the inference latency in ms.
- `test_perf_metric_inference_peak_gpu_memory`: the inference-phase peak GPU memory usage in GB.
- `test_perf_metric_context_gpu_memory`: the context GPU memory usage in MB.
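For example, assuming the pytest invocation above was used, a quick way to inspect the captured metrics is to load the CSV output. The snippet below is only an illustration; it makes no assumption about the CSV column layout and simply prints each row:

```python
# Print the perf metrics captured by the run above. The path matches the
# --output-dir used in the example; the CSV column layout is not assumed here.
import csv

results_path = "/workspace/test-log/perf_scripts_test_results.csv"

with open(results_path, newline="") as f:
    for row in csv.DictReader(f):
        print(row)
```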
Common Issues and solutions
- No package 'libffi' found
Install libffi by running `sudo apt-get install libffi-dev`, then rerun.