TensorRT-LLMs/tensorrt_llm/_torch/pyexecutor
Latest commit ff4047414b by Shunkangz (2025-08-27 11:16:12 +08:00):
[None][opt] Balance the request based on number of tokens in AttentionDP (#7183)
Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| _util.py | fix/improve kvcache allocation in PyTorch runtime (#5933) | 2025-08-26 12:40:22 +08:00 |
| config_utils.py | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00 |
| config.py | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| cuda_graph_runner.py | [None][refactor] refactor the CUDA graph runner to manage all CUDA graphs (#6846) | 2025-08-25 20:52:05 +08:00 |
| executor_request_queue.py | [None][opt] Balance the request based on number of tokens in AttentionDP (#7183) | 2025-08-27 11:16:12 +08:00 |
| finish_reason.py | [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) | 2025-06-25 17:41:36 +02:00 |
| grammar_matcher.py | [TRTLLM-6409][feat] Enable guided decoding with speculative decoding (part 1: two-model engine) (#6300) | 2025-08-07 05:53:48 -04:00 |
| guided_decoder.py | [TRTLLM-2285][feat] Enable guided decoding with CUDA graph padding and draft model chunked prefill (#6774) | 2025-08-12 09:30:06 +08:00 |
| handle_logits.py | [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) | 2025-08-22 08:09:30 +02:00 |
| kv_cache_transceiver.py | feat: Add support for disaggregation with pp with pytorch backend (#6369) | 2025-07-30 09:42:13 -04:00 |
| layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_request.py | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033) | 2025-08-25 10:37:40 +03:00 |
| make_decoding_batch_input_output.py | feat: Optimize TRTLLM Sampler perf single beam single step (#5550) | 2025-07-07 15:44:47 +02:00 |
| mamba_cache_manager.py | [None] [chore] Mamba cache in separate file (#6796) | 2025-08-15 13:42:51 +03:00 |
| model_engine.py | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750) | 2025-08-26 18:31:33 -04:00 |
| py_executor_creator.py | fix/improve kvcache allocation in PyTorch runtime (#5933) | 2025-08-26 12:40:22 +08:00 |
| py_executor.py | [None][opt] Balance the request based on number of tokens in AttentionDP (#7183) | 2025-08-27 11:16:12 +08:00 |
| resource_manager.py | fix/improve kvcache allocation in PyTorch runtime (#5933) | 2025-08-26 12:40:22 +08:00 |
| sampler.py | [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) | 2025-08-22 08:09:30 +02:00 |
| scheduler.py | [TRTLLM-6392][feat] Support turning on/off spec decoding dynamically (#6363) | 2025-07-31 15:31:39 -04:00 |
| seq_slot_manager.py | [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) | 2025-08-15 09:52:06 -07:00 |