TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit daa31d78f4: [https://nvbugs/5652552][fix] Log the llm args for main branch (#9120)
Author: Leslie Fang
Signed-off-by: leslie-fang25 <leslief@nvidia.com>
Date: 2025-11-14 07:43:21 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py [TRTLLM-7723][feat] sampling using FlashInfer.sampling (#8581) 2025-11-11 03:21:19 -08:00
config_utils.py [None][fix] fix config loading for DeepSeek-V3.2 in trtllm-bench (#8729) 2025-10-29 05:17:16 -07:00
cuda_graph_runner.py [TRTLLM-8521][chore] remove circular dependency between model engine and cuda graph runner (#7572) 2025-11-11 10:13:45 -08:00
executor_request_queue.py [None][chore] Remove is_disaggregated param in executor request queue (#9049) 2025-11-12 13:37:15 -05:00
finish_reason.py [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) 2025-06-25 17:41:36 +02:00
grammar_matcher.py [TRTLLM-8763][chore] Deprecate pybind based GuidedDecodingConfig usage in torch backend (#8717) 2025-10-29 20:37:14 +08:00
guided_decoder.py [TRTLLM-8763][chore] Deprecate pybind based GuidedDecodingConfig usage in torch backend (#8717) 2025-10-29 20:37:14 +08:00
handle_additional_outputs.py [TRTLLM-4517] [feat] Additional model outputs (#7206) 2025-10-13 15:33:18 +02:00
handle_logits.py [TRTLLM-8031][feat] Add chunked return_generation_logits logic (#7831) 2025-10-01 12:47:07 -04:00
kv_cache_connector.py [None][feat] Support KV Connector with Disagg Prefill Worker (#8246) 2025-10-24 11:09:06 -07:00
kv_cache_transceiver.py [TRTLLM-7078][chore] optimal kvcache transfer for VWSA (#7952) 2025-10-24 08:58:16 -04:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py [None][feat] Return logprobs incrementally in torch backend (#8785) 2025-11-07 10:23:39 +08:00
make_decoding_batch_input_output.py feat: Optimize TRTLLM Sampler perf single beam single step (#5550) 2025-07-07 15:44:47 +02:00
mamba_cache_manager.py [TRTLLM-8477][chore] Replace KvCacheConfigCpp with KvCacheConfig inside PyExecutor (#8259) 2025-10-13 14:55:36 +08:00
model_engine.py [TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding (#8706) 2025-11-13 10:20:16 -05:00
model_loader.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00
py_executor_creator.py [https://nvbugs/5652552][fix] Log the llm args for main branch (#9120) 2025-11-14 07:43:21 +08:00
py_executor.py [TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding (#8706) 2025-11-13 10:20:16 -05:00
resource_manager.py [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) 2025-11-13 12:14:58 +08:00
sampler.py [TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding (#8706) 2025-11-13 10:20:16 -05:00
sampling_utils_flashinfer.py [TRTLLM-8377][test] unit tests for TorchSampler batched sampling (#9012) 2025-11-11 07:16:42 -08:00
sampling_utils.py [TRTLLM-7723][feat] sampling using FlashInfer.sampling (#8581) 2025-11-11 03:21:19 -08:00
scheduler.py [TRTLLM-8483][chore] Refine scheduler_config and peft_cache_config in create_py_executor (#8451) 2025-10-22 08:33:48 +08:00
seq_slot_manager.py [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) 2025-08-15 09:52:06 -07:00