TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
HuiGao-NV f4059c6e2e
Add test case for kv memory estimation (#4158)
* Add test case for kv memory estimation
* Dump running log into file and parse kv cache memory size from file
* Set a bigger peak memory size for the mixed-precision case and the test_ptp_quickstart_advanced_eagle3 case
* Revert change to usage of fraction
* Use a context manager to guard temp files

Signed-off-by: Hui Gao <huig@nvidia.com>
2025-05-14 18:39:25 +08:00
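The commit above dumps the running log to a file and parses the KV cache memory size back out, with a context manager guarding the temp file. A minimal sketch of that pattern is below; the log line format, the regex, and the `parse_kv_cache_gib` helper are all hypothetical stand-ins, not the actual format or code in `_util.py`:

```python
import re
import tempfile

# Hypothetical log line format; the real TensorRT-LLM log message differs.
KV_CACHE_PATTERN = re.compile(r"Allocated ([\d.]+) GiB for KV cache")

def parse_kv_cache_gib(log_path: str) -> float:
    """Scan a run log and return the reported KV cache size in GiB."""
    with open(log_path) as f:
        for line in f:
            match = KV_CACHE_PATTERN.search(line)
            if match:
                return float(match.group(1))
    raise ValueError("KV cache size not found in log")

# The context manager deletes the temp log even if parsing raises,
# which is the "guard temp files" point from the commit message.
with tempfile.NamedTemporaryFile("w+", suffix=".log", delete=True) as tmp:
    tmp.write("Allocated 12.5 GiB for KV cache\n")  # stand-in for a real run log
    tmp.flush()
    size_gib = parse_kv_cache_gib(tmp.name)
    assert size_gib == 12.5
```

In the test itself, the file would be populated by redirecting the executor's stdout/log output rather than by a hard-coded line as here.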
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py Add test case for kv memory estimation (#4158) 2025-05-14 18:39:25 +08:00
config.py [TRTLLM-5050][feat] Enable per-request stats with PyT backend (#4156) 2025-05-12 21:35:15 -04:00
cuda_graph_runner.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
decoder.py feat: adopt new logprob definition in PyTorch flow (#4057) 2025-05-08 20:16:40 +08:00
guided_decoder.py feat: Support the Structural Tag in guided decoding (#4066) 2025-05-12 17:24:50 +08:00
kv_cache_transceiver.py cacheTransceiver buffer manager (#3798) 2025-04-27 11:48:15 +08:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py feat: adopt new logprob definition in PyTorch flow (#4057) 2025-05-08 20:16:40 +08:00
model_engine.py feat: Prefetch safetensors files before loading them (#4140) 2025-05-13 13:35:30 +08:00
py_executor_creator.py fix: [https://nvbugspro.nvidia.com/bug/5238626] illegal memory address when running llama 4 with cuda graph enabled (#4101) 2025-05-13 14:58:54 +08:00
py_executor.py fix: Merge PP overlap and non-overlap executor loop (#3878) 2025-05-14 06:04:36 +08:00
resource_manager.py [fix] Fix add_dummy_requests for spec decoding cases (#4084) 2025-05-09 16:52:51 +08:00
scheduler.py refactor: collect executor and decoder states into dataclass (#3234) 2025-04-15 16:31:45 +08:00
seq_slot_manager.py fix: skip add new slot if request has slot 0 (#3991) 2025-05-06 07:46:39 +02:00