TensorRT-LLM/cpp
Latest commit: 15de45d782 by Anish Shanbhag, 2025-10-22 20:53:08 -04:00
[TRTLLM-8682][chore] Remove auto_parallel module (#8329)
Signed-off-by: Anish Shanbhag <ashanbhag@nvidia.com>
| Name | Last commit | Last commit date |
|------|-------------|-------------------|
| cmake | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| include/tensorrt_llm | [https://nvbugs/5429636][feat] Kv transfer timeout (#8459) | 2025-10-22 09:29:02 -04:00 |
| kernels | [None][feat] Add vLLM KV Pool support for XQA mla kernel (#8560) | 2025-10-22 14:12:57 +08:00 |
| micro_benchmarks | [TRTLLM-6286] [perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757) | 2025-09-21 11:38:17 +08:00 |
| tensorrt_llm | [None][fix] Fix EPLB CPU thread NUMA binding (#8579) | 2025-10-22 10:52:09 -04:00 |
| tests | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| CMakeLists.txt | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |