TensorRT-LLM/cpp
Latest commit a80d2373a3 by Bo Li:
fix: [https://nvbugspro.nvidia.com/bug/5243482] If FlashMLA is used, the existence of FMHA based MLA kernels should not be checked. (#3862)
* Add mIsGenerationMLA to differentiate context and generation MLA in AttentionOp.
For generation MLA, when FlashMLA is used, do not check for the existence of an FMHA-based MLA kernel (a sketch of this check follows the commit log below).

Signed-off-by: Bo Li <bobboli0202@gmail.com>

* Run pre-commit.

Signed-off-by: Bo Li <bobboli0202@gmail.com>

* Fix compile error.

Signed-off-by: Bo Li <bobboli0202@gmail.com>

---------

Signed-off-by: Bo Li <bobboli0202@gmail.com>
2025-04-30 14:27:38 +08:00
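
Below is a minimal C++ sketch of the gating this commit describes. It is an illustration under assumed names, not the actual TensorRT-LLM code: shouldCheckFmhaMlaKernel and its parameters are hypothetical; the real change only adds the mIsGenerationMLA member to AttentionOp and skips the FMHA-based MLA kernel existence check when FlashMLA serves the generation path.

```cpp
// Hypothetical sketch of the kernel-existence gating; the function and parameter
// names below are assumed for illustration and do not mirror the TensorRT-LLM sources.
#include <iostream>

bool shouldCheckFmhaMlaKernel(bool isMLAEnabled, bool isGenerationMLA, bool useFlashMLA)
{
    if (!isMLAEnabled)
    {
        return false; // MLA not in use: there is no MLA kernel to look up.
    }
    if (isGenerationMLA && useFlashMLA)
    {
        return false; // Generation MLA handled by FlashMLA: skip the FMHA-based MLA kernel check.
    }
    return true; // Context MLA, or generation MLA without FlashMLA, still relies on FMHA-based kernels.
}

int main()
{
    // Generation-phase MLA with FlashMLA enabled: the existence check is skipped.
    std::cout << std::boolalpha
              << shouldCheckFmhaMlaKernel(/*isMLAEnabled=*/true,
                                          /*isGenerationMLA=*/true,
                                          /*useFlashMLA=*/true)
              << std::endl; // prints "false"
    return 0;
}
```

The point of the per-phase flag is that the context (prefill) MLA path can still require FMHA-based kernels even when FlashMLA covers generation, so only the generation-side check is relaxed.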
Name | Last commit | Last updated
cmake | refactor: Clean up CMakeLists.txt (#3479) | 2025-04-18 14:39:29 +08:00
include/tensorrt_llm | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
micro_benchmarks | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
tensorrt_llm | fix: [https://nvbugspro.nvidia.com/bug/5243482] If FlashMLA is used, the existence of FMHA based MLA kernels should not be checked. (#3862) | 2025-04-30 14:27:38 +08:00
tests | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
CMakeLists.txt | refactor: Clean up CMakeLists.txt (#3479) | 2025-04-18 14:39:29 +08:00