TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py [TRTLLM-6780][fix] Add multimodal data to dummy requests during memory profiling (#7539) 2025-10-16 17:49:22 +02:00
config_utils.py [None][chore] Refine qwen3-next implementation. (#8064) 2025-09-30 15:05:13 -04:00
config.py [None][chore] Cherry-pick from (#7598) Make low_precision_combine as a llm arg (#7898) 2025-09-28 22:32:33 -04:00
cuda_graph_runner.py [None][feat] reuse cudagraph memory pool in normal forward flow (#8095) 2025-10-16 07:08:44 +08:00
executor_request_queue.py [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) 2025-10-04 08:12:24 +08:00
finish_reason.py [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) 2025-06-25 17:41:36 +02:00
grammar_matcher.py [TRTLLM-8209][feat] Support new structural tag API (upgrade XGrammar to 0.1.25) (#7893) 2025-09-23 09:10:09 +08:00
guided_decoder.py [None][fix] Disable torch.compile for CapturableGuidedDecoder (#7871) 2025-09-22 10:04:30 +08:00
handle_additional_outputs.py [TRTLLM-4517] [feat] Additional model outputs (#7206) 2025-10-13 15:33:18 +02:00
handle_logits.py [TRTLLM-8031][feat] Add chunked return_generation_logits logic (#7831) 2025-10-01 12:47:07 -04:00
kv_cache_connector.py [None][feat] Enable CUDA graph support for KvConnectorWorker API (#8275) 2025-10-17 18:09:03 -04:00
kv_cache_transceiver.py [TRTLLM-7964][infra] Set nixl to default cache transceiver backend (#7926) 2025-10-19 19:24:43 +08:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py [None][feat] Support cached tokens for Openai server (#7637) 2025-10-16 20:51:37 +08:00
make_decoding_batch_input_output.py feat: Optimize TRTLLM Sampler perf single beam single step (#5550) 2025-07-07 15:44:47 +02:00
mamba_cache_manager.py [TRTLLM-8477][chore] Replace KvCacheConfigCpp with KvCacheConfig inside PyExecutor (#8259) 2025-10-13 14:55:36 +08:00
model_engine.py [None][feat] Enable CUDA graph support for KvConnectorWorker API (#8275) 2025-10-17 18:09:03 -04:00
model_loader.py [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
py_executor_creator.py [TRTLLM-8480][chore] clean create_py_executor API (#8412) 2025-10-17 23:52:02 -04:00
py_executor.py [TRTLLM-8436][feat] batched sampling and top-k logprobs improvements (#8398) 2025-10-20 11:15:41 +02:00
resource_manager.py [None][fix] Fix KV event consumption (#6346) 2025-10-18 15:41:26 -07:00
sampler.py [None][fix] restore list[list[list[int]]] in add_token (#8502) 2025-10-20 22:34:57 -04:00
sampling_utils.py [TRTLLM-8436][feat] batched sampling and top-k logprobs improvements (#8398) 2025-10-20 11:15:41 +02:00
scheduler.py [https://nvbugs/5528405][fix] Set up draft_tokens before scheduling (#7903) 2025-09-24 09:56:17 +08:00
seq_slot_manager.py [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) 2025-08-15 09:52:06 -07:00
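For context on the sampler.py entry above (#8502): the commit message says the fix restores a `list[list[list[int]]]` token layout in `add_token`. Below is a minimal sketch of what that three-level nesting looks like, assuming a request -> beam -> token convention; the helper signature here is hypothetical and illustrative only, not TensorRT-LLM's actual API.

```python
# Illustrative sketch of a list[list[list[int]]] token container, as
# referenced by the sampler.py fix (#8502). The indexing convention
# (request -> beam -> token) and the helper name are assumptions,
# not TensorRT-LLM's real add_token signature.
from typing import List

Tokens = List[List[List[int]]]  # [request][beam][token id]

def add_token(tokens: Tokens, request_idx: int, beam_idx: int, token_id: int) -> None:
    """Append one newly sampled token id to the given request/beam slot."""
    tokens[request_idx][beam_idx].append(token_id)

# Two requests: the first decodes with two beams, the second with one.
tokens: Tokens = [
    [[101, 102], [101, 103]],
    [[7]],
]
add_token(tokens, request_idx=0, beam_idx=1, token_id=104)
assert tokens[0][1] == [101, 103, 104]
```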