TensorRT-LLM/cpp
Latest commit dccbfc8b1e by WeiHaocheng (2025-07-03 07:05:31 -04:00):
fix: Set init value for moe expert id (#5660)
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
cmake/                   feat: NIXL interface integration (#3934)                                                                2025-05-19 18:18:22 +08:00
include/tensorrt_llm/    refactor: Clean up DecodingInput and DecodingOutput (#5617)                                             2025-07-01 14:31:42 +02:00
kernels/                 [#5403][perf] Conditionally enable SWAP AB for speculative decoding (#5404)                             2025-07-01 18:32:37 +08:00
micro_benchmarks/        feat: Add support for per expert activation scaling factors (#5013)                                     2025-06-28 09:10:35 +12:00
tensorrt_llm/            fix: Set init value for moe expert id (#5660)                                                           2025-07-03 07:05:31 -04:00
tests/                   [TRTLLM-1316] refactor: Remove unnecessary pipeline parallelism logic from postProcessRequest (#5489)  2025-07-02 10:13:31 +02:00
CMakeLists.txt           Fix execute_process: check results using EQUAL (#5481)                                                  2025-06-27 11:57:04 +08:00
conandata.yml            infra: add conan (#3744)                                                                                2025-04-30 11:53:14 -07:00
conanfile.py             feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)              2025-06-08 10:25:18 +08:00
libnuma_conan.py         fix cuda driver link issue with driver version less than 12.3 (#5025)                                   2025-06-10 15:27:39 +08:00