TensorRT-LLM/cpp

Latest commit: 1be7faef37
[TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904)
Author: Matthias Jouanneaux
Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
Date: 2025-09-19 20:55:32 +08:00
Name                 | Last commit message                                                                          | Last commit date
---------------------|----------------------------------------------------------------------------------------------|--------------------------
cmake                | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)                         | 2025-09-16 09:56:18 +08:00
include/tensorrt_llm | [TRTLLM-8044][refactor] Rename data -> cache for cacheTransceiver (#7659)                    | 2025-09-16 08:43:56 -04:00
kernels              | [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207)                           | 2025-09-18 16:26:20 +08:00
micro_benchmarks     | [None][perf] Add MOE support for dynamic cluster shapes and custom epilogue schedules (#6126) | 2025-09-02 21:54:43 -04:00
tensorrt_llm         | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904)                    | 2025-09-19 20:55:32 +08:00
tests                | [TRTLLM-8044][refactor] Rename data -> cache for cacheTransceiver (#7659)                    | 2025-09-16 08:43:56 -04:00
CMakeLists.txt       | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)                         | 2025-09-16 09:56:18 +08:00
conandata.yml        | infra: add conan (#3744)                                                                     | 2025-04-30 11:53:14 -07:00
conanfile.py         | feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)    | 2025-06-08 10:25:18 +08:00
libnuma_conan.py     | fix cuda driver link issue with driver version less than 12.3 (#5025)                        | 2025-06-10 15:27:39 +08:00