TensorRT-LLM/cpp
Latest commit: 42a9385d02 by WeiHaocheng (2025-06-30 13:06:09 +08:00)
[TRTLLM-5331] perf: Replace allgather with AllToAllPrepare (#5570)
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| cmake | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |
| include/tensorrt_llm | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| kernels | [perf] improve XQA-MLA perf (#5468) | 2025-06-26 18:09:13 +08:00 |
| micro_benchmarks | feat: Add support for per expert activation scaling factors (#5013) | 2025-06-28 09:10:35 +12:00 |
| tensorrt_llm | [TRTLLM-5331] perf: Replace allgather with AllToAllPrepare (#5570) | 2025-06-30 13:06:09 +08:00 |
| tests | [feat] Optimizations on weight-only batched gemv kernel (#5420) | 2025-06-30 10:20:16 +08:00 |
| CMakeLists.txt | Fix execute_process: check results using EQUAL (#5481) | 2025-06-27 11:57:04 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |