TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit 9afe510367: [fix] Fix llama4 + eagle3 (#3998)
Author: Mike Iovine
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Date: 2025-05-08 19:20:27 -04:00
File                       Last commit                                                             Date
__init__.py                Update TensorRT-LLM (#2755)                                             2025-02-11 03:01:00 +00:00
_util.py                   feat: support multi lora adapters and TP (#3885)                        2025-05-08 23:45:45 +08:00
config.py                  fix: Fix C++ decoder synchronization in PyTorch (#3106)                 2025-04-23 23:55:27 +08:00
cuda_graph_runner.py       Support CUDA graphs for EAGLE3 (#3176)                                  2025-04-17 04:53:50 +08:00
decoder.py                 feat: adopt new logprob definition in PyTorch flow (#4057)              2025-05-08 20:16:40 +08:00
guided_decoder.py          fix: Fix C++ decoder synchronization in PyTorch (#3106)                 2025-04-23 23:55:27 +08:00
kv_cache_transceiver.py    cacheTransceiver buffer manager (#3798)                                 2025-04-27 11:48:15 +08:00
layerwise_nvtx_marker.py   Update TensorRT-LLM (#2849)                                             2025-03-04 18:44:00 +08:00
llm_request.py             feat: adopt new logprob definition in PyTorch flow (#4057)              2025-05-08 20:16:40 +08:00
model_engine.py            [feat/] enable attention DP in Llama4 maverick model - part 1 (#4065)   2025-05-08 05:06:40 +08:00
py_executor_creator.py     [fix] Fix llama4 + eagle3 (#3998)                                       2025-05-08 19:20:27 -04:00
py_executor.py             feat: support to trace executor loop. (#3983)                           2025-05-05 10:26:33 +08:00
resource_manager.py        feat: support multi lora adapters and TP (#3885)                        2025-05-08 23:45:45 +08:00
scheduler.py               refactor: collect executor and decoder states into dataclass (#3234)    2025-04-15 16:31:45 +08:00
seq_slot_manager.py        fix: skip add new slot if request has slot 0 (#3991)                    2025-05-06 07:46:39 +02:00