TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit: f121f13ddf by tomeras91 (2025-06-10 11:09:37 +03:00)
[nvbug 5325284][fix] Increase Nemotron-H warmup request robustness (#4954)
Signed-off-by: Tomer Asida <57313761+tomeras91@users.noreply.github.com>
Name                                 Last commit message                                                         Last commit date
__init__.py                          Update TensorRT-LLM (#2755)                                                 2025-02-11 03:01:00 +00:00
_util.py                             chore: Mass integration of release/0.20 (#4898)                             2025-06-08 23:26:26 +08:00
config_utils.py                      feat: support kv cache reuse for MLA (#3571)                                2025-05-15 15:22:21 +08:00
config.py                            Cherry pick feat/llama4 to main (#4739)                                     2025-05-30 05:28:40 +08:00
cuda_graph_runner.py                 feat: Integration of Fused QKNorm+RoPE. (#4611)                             2025-05-28 11:20:45 +08:00
guided_decoder.py                    feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828)  2025-06-10 07:28:34 +08:00
handle_context_logits.py             feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828)  2025-06-10 07:28:34 +08:00
handle_generation_logits.py          [TRTLLM-4987][feat] Support generation logits in TRTLLMSampler (#4819)      2025-06-09 06:30:01 +03:00
kv_cache_transceiver.py              Agent interface impl for NIXL (#4125)                                       2025-05-22 09:09:41 +08:00
layerwise_nvtx_marker.py             Update TensorRT-LLM (#2849)                                                 2025-03-04 18:44:00 +08:00
llm_request.py                       [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145)  2025-06-10 01:59:56 +08:00
make_decoding_batch_input_output.py  feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828)  2025-06-10 07:28:34 +08:00
model_engine.py                      chore: Refine weight prefetching. (#4893)                                   2025-06-09 21:24:16 +08:00
py_executor_creator.py               fix: handle OOMs during KV cache estimation (#4690)                         2025-06-05 10:02:26 +02:00
py_executor.py                       feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828)  2025-06-10 07:28:34 +08:00
resource_manager.py                  [nvbug 5325284][fix] Increase Nemotron-H warmup request robustness (#4954)  2025-06-10 11:09:37 +03:00
sampler.py                           feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828)  2025-06-10 07:28:34 +08:00
scheduler.py                         fix: max_num_sequences calculation with overlap scheduling (#4532)          2025-06-03 09:31:22 +02:00
seq_slot_manager.py                  fix: skip add new slot if request has slot 0 (#3991)                        2025-05-06 07:46:39 +02:00