TensorRT-LLM/cpp
Latest commit ec3ebae43e by Bo Deng (2025-07-16 13:54:42 +08:00):
[TRTLLM-6471] Infra: Upgrade NIXL to 0.3.1 (#5991)
Signed-off-by: Rabia Loulou <174243936+rabial-nv@users.noreply.github.com>
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Rabia Loulou <174243936+rabial-nv@users.noreply.github.com>
Co-authored-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
Name                   Latest commit                                                                                      Date
cmake                  feat: reduce unnecessary kernel generation (#5476)                                                 2025-07-04 14:37:49 +08:00
include/tensorrt_llm   refactor: Remove enforced sorted order of batch slots (#3502)                                      2025-07-14 17:23:02 +02:00
kernels                [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779) 2025-07-14 17:17:30 +08:00
micro_benchmarks       fix: Fix MOE benchmark to rotate buffers to prevent L2 cache reuse (#4135)                         2025-07-15 13:40:42 +12:00
tensorrt_llm           feat: use session abstraction in data transceiver and cache formatter (#5611)                      2025-07-16 13:52:44 +08:00
tests                  [TRTLLM-6471] Infra: Upgrade NIXL to 0.3.1 (#5991)                                                 2025-07-16 13:54:42 +08:00
CMakeLists.txt         feat: binding type build argument (pybind, nanobind) (#5802)                                       2025-07-11 00:48:50 +09:00
conandata.yml          infra: add conan (#3744)                                                                           2025-04-30 11:53:14 -07:00
conanfile.py           feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)         2025-06-08 10:25:18 +08:00
libnuma_conan.py       fix cuda driver link issue with driver version less than 12.3 (#5025)                              2025-06-10 15:27:39 +08:00