__init__.py
Update TensorRT-LLM (#2755)
2025-02-11 03:01:00 +00:00
_util.py
[None][chore] remove executor config in kv cache creator (#7526)
2025-09-10 21:14:44 +08:00
config_utils.py
[None][fix] fix hunyuan_moe init bug (#7502)
2025-09-04 03:06:00 -04:00
config.py
[None][chore] expose tokens_per_block into KvCacheConfig (#5911)
2025-09-07 21:14:10 -04:00
cuda_graph_runner.py
[https://nvbugs/5516666][fix] cherrypick fix to the CUDA graph warmup issue when using speculative decoding (#7737)
2025-09-17 06:24:20 +08:00
executor_request_queue.py
[None][opt] Balance the request based on number of tokens in AttentionDP (#7183)
2025-08-27 11:16:12 +08:00
finish_reason.py
[TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328)
2025-06-25 17:41:36 +02:00
grammar_matcher.py
[TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948)
2025-09-03 15:16:11 -07:00
guided_decoder.py
[TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481)
2025-09-04 23:30:14 +08:00
handle_logits.py
[TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867)
2025-08-22 08:09:30 +02:00
kv_cache_connector.py
[None][chore] add TorchLlmArgs to the connector api (#7493)
2025-09-09 09:05:59 -04:00
kv_cache_transceiver.py
[TRTLLM-7361][feat] KV cache transfer for uneven pp (#7117)
2025-09-08 13:37:46 -04:00
layerwise_nvtx_marker.py
Update TensorRT-LLM (#2849)
2025-03-04 18:44:00 +08:00
llm_request.py
[https://nvbugs/5508536][fix] Revert #7041: Move stop_criteria to sample_async (#7041) (#7796)
2025-09-17 21:27:01 -04:00
make_decoding_batch_input_output.py
feat: Optimize TRTLLM Sampler perf single beam single step (#5550)
2025-07-07 15:44:47 +02:00
mamba_cache_manager.py
[None][chore] Mamba cache in separate file (#6796)
2025-08-15 13:42:51 +03:00
model_engine.py
[https://nvbugs/5516666][fix] cherrypick fix to the CUDA graph warmup issue when using speculative decoding (#7737)
2025-09-17 06:24:20 +08:00
py_executor_creator.py
[TRTLLM-4629][feat] Add support of CUDA13 and sm103 devices (#7568)
2025-09-16 09:56:18 +08:00
py_executor.py
[TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651)
2025-09-16 07:33:44 +08:00
resource_manager.py
[https://nvbugs/5480289][fix] release slot manager in mtp MTPHiddenStatesManager (#7340)
2025-09-02 19:37:51 -07:00
sampler.py
[https://nvbugs/5508536][fix] Revert #7041: Move stop_criteria to sample_async (#7041) (#7796)
2025-09-17 21:27:01 -04:00
scheduler.py
[TRTLLM-6392][feat] Support turning on/off spec decoding dynamically (#6363)
2025-07-31 15:31:39 -04:00
seq_slot_manager.py
[https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537)
2025-08-15 09:52:06 -07:00