TensorRT-LLMs/tensorrt_llm/_torch/attention_backend
dongjiyingdjy 22ff81b047
fix: fix illegal memory access when mtp >= 2 (#3006)
* fix - fix illegal memory access when mtp > 2

---------

Signed-off-by: Jiying Dong <87510204+dongjiyingdjy@users.noreply.github.com>
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Co-authored-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
2025-04-01 13:36:45 +08:00
__init__.py Update (#2978) 2025-03-23 16:39:35 +08:00
flashinfer.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
interface.py fix: fix illegal memory access when mtp >= 2 (#3006) 2025-04-01 13:36:45 +08:00
star_flashinfer.py Refactor imports inside tensorrt_llm._torch. (#3015) 2025-03-26 11:01:07 +08:00
trtllm.py fix: fix illegal memory access when mtp >= 2 (#3006) 2025-04-01 13:36:45 +08:00
utils.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
vanilla.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00