TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit 3d940e77f0 by Rohan Varma
[TRTLLM-5273]feat/Use full attention mask if Llama3 is used as encoder and fix EarlyStopDecoder unsqueeze bug (#4290)
* add bidirectional support and fix EarlyStopDecoder unsqueeze to be compatible with LogitsStorage
* run pre-commit
* instead of bidirectional flag use ModelConfig.is_generation
* fix unit test to extract logits from correct dim

Signed-off-by: Rohan Varma <rohanv@nvidia.com>
2025-05-20 10:15:36 -07:00
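The gist of the change can be sketched as follows. This is a minimal illustration, not the TensorRT-LLM implementation: the helpers `build_attention_mask` and `store_last_token_logits`, the boolean `is_generation` argument (standing in for `ModelConfig.is_generation`), and the `[num_tokens, beam_width, vocab_size]` logits layout are assumptions made for the example.

```python
# Minimal sketch of the two behaviors named in the commit message; helper
# names and tensor layouts are illustrative, not the actual TensorRT-LLM APIs.
import torch


def build_attention_mask(seq_len: int, is_generation: bool) -> torch.Tensor:
    """Causal mask for decoder-style (generation) use, full mask otherwise.

    When a decoder-only model such as Llama3 is reused as an encoder
    (is_generation=False), every token may attend to every other token,
    so the mask is all True instead of lower-triangular.
    """
    if is_generation:
        return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return torch.ones(seq_len, seq_len, dtype=torch.bool)


def store_last_token_logits(logits: torch.Tensor) -> torch.Tensor:
    """Illustrates the unsqueeze fix: a logits store that expects a
    [num_tokens, beam_width, vocab_size] layout needs the singleton beam
    dimension inserted at dim 1, not dim 0."""
    # logits: [num_tokens, vocab_size] -> [num_tokens, 1, vocab_size]
    return logits.unsqueeze(1)


if __name__ == "__main__":
    print(build_attention_mask(4, is_generation=True))   # lower-triangular
    print(build_attention_mask(4, is_generation=False))  # all True
    print(store_last_token_logits(torch.randn(4, 32000)).shape)  # (4, 1, 32000)
```

With this shape convention, a test that previously pulled logits from dim 0 would read them from dim 1 instead, matching the "extract logits from correct dim" bullet above.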
File | Last commit | Date
__init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
_util.py | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00
config_utils.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00
config.py | API Breaking Change + Readability: "decoder"->"sampler" (#4121) | 2025-05-16 23:52:25 +08:00
cuda_graph_runner.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00
guided_decoder.py | feat: Support the Structural Tag in guided decoding (#4066) | 2025-05-12 17:24:50 +08:00
kv_cache_transceiver.py | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
llm_request.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00
model_engine.py | [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) | 2025-05-19 22:12:25 +08:00
py_executor_creator.py | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00
py_executor.py | API Breaking Change + Readability: "decoder"->"sampler" (#4121) | 2025-05-16 23:52:25 +08:00
resource_manager.py | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00
sampler.py | [TRTLLM-5273]feat/Use full attention mask if Llama3 is used as encoder and fix EarlyStopDecoder unsqueeze bug (#4290) | 2025-05-20 10:15:36 -07:00
scheduler.py | API Breaking Change + Readability: "decoder"->"sampler" (#4121) | 2025-05-16 23:52:25 +08:00
seq_slot_manager.py | fix: skip add new slot if request has slot 0 (#3991) | 2025-05-06 07:46:39 +02:00