TensorRT-LLM/cpp
| Name | Last commit | Date |
|------|-------------|------|
| cmake | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| include/tensorrt_llm | [TRTLLM-8044][refactor] Rename data -> cache for cacheTransceiver (#7659) | 2025-09-16 08:43:56 -04:00 |
| kernels | [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207) | 2025-09-18 16:26:20 +08:00 |
| micro_benchmarks | [None][perf] Add MOE support for dynamic cluster shapes and custom epilogue schedules (#6126) | 2025-09-02 21:54:43 -04:00 |
| tensorrt_llm | [TRTLLM-6994][feat] FP8 Context MLA integration (Cherry-pick https://github.com/NVIDIA/TensorRT-LLM/pull/6059 from release/1.1.0rc2) (#7610) | 2025-09-19 09:40:49 +08:00 |
| tests | [TRTLLM-8044][refactor] Rename data -> cache for cacheTransceiver (#7659) | 2025-09-16 08:43:56 -04:00 |
| CMakeLists.txt | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |