TensorRT-LLM/cpp
Latest commit 92397476d3 by Perkz Zheng (2025-07-30 11:33:22 +08:00):
[https://nvbugspro.nvidia.com/bug/5415268] fix illegal smem access with chunked attention (#6401)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
Name                  Last commit                                                                                                Date
cmake                 cmake feat: NIXL interface integration (#3934)                                                             2025-05-19 18:18:22 +08:00
include/tensorrt_llm  feat: Add LLGuidance Support for PyTorch Backend (#5214)                                                   2025-06-18 19:33:34 +08:00
kernels               [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779)         2025-07-08 10:35:38 +08:00
micro_benchmarks      [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215)                          2025-06-17 15:23:24 +08:00
tensorrt_llm          [https://nvbugspro.nvidia.com/bug/5415268] fix illegal smem access with chunked attention (#6401)          2025-07-30 11:33:22 +08:00
tests                 cherry-pick: [fix: nvbugs/5355493] Correctly clamp max sequence len to max attention window (#5874)        2025-07-09 19:11:17 +02:00
CMakeLists.txt        [Infra] - Update dependencies with NGC PyTorch 25.05 and TRT 10.11 (#4885)                                 2025-06-17 23:48:34 +08:00
conandata.yml         infra: add conan (#3744)                                                                                   2025-04-30 11:53:14 -07:00
conanfile.py          feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)                  2025-06-08 10:25:18 +08:00
libnuma_conan.py      fix cuda driver link issue with driver version less than 12.3 (#5025)                                      2025-06-10 15:27:39 +08:00