TensorRT-LLM/cpp/tensorrt_llm
Latest commit 92397476d3 (2025-07-30 11:33:22 +08:00) by Perkz Zheng:
[https://nvbugspro.nvidia.com/bug/5415268] fix illegal smem access with chunked attention (#6401)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
| Name | Last commit | Last updated |
|------|-------------|--------------|
| batch_manager | cherry-pick: [fix: nvbugs/5355493] Correctly clamp max sequence len to max attention window (#5874) | 2025-07-09 19:11:17 +02:00 |
| common | [NVBUG:5355009] Modify check for fuse_fp4_quant on SM120 (#5651) | 2025-07-03 22:08:15 +09:00 |
| cutlass_extensions/include/cutlass_extensions | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| executor | Fix: missing clientId when serialize and deserialize response (cherry-pick #5231) (#5378) | 2025-06-24 10:00:37 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | [https://nvbugspro.nvidia.com/bug/5415268] fix illegal smem access with chunked attention (#6401) | 2025-07-30 11:33:22 +08:00 |
| layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| plugins | [5321981] fix: Fix the Llama3.1 405B hanging issue. (#5698) | 2025-07-04 12:29:19 +08:00 |
| pybind | feat: Add LLGuidance Support for PyTorch Backend (#5214) | 2025-06-18 19:33:34 +08:00 |
| runtime | refactor: remove decoder request from decoder interface (#5129) | 2025-06-16 09:12:30 +02:00 |
| testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00 |
| thop | [nvbugs/5326453] Avoid nesting NCCL grouping in allgather OP (#5789) | 2025-07-08 15:39:27 +09:00 |
| CMakeLists.txt | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |