TensorRT-LLM / cpp

Latest commit: 5a65af24cd by sychen52
[OMNIML-2336][feat] Add NVFP4 x FP8 moe kernels (#7821)
Signed-off-by: Shiyang Chen <shiychen@nvidia.com>
2025-09-24 12:14:35 -07:00
| Entry | Latest commit | Date |
| --- | --- | --- |
| cmake | [TRTLLM-7989][infra] Bundle UCX and NIXL libs in the TRTLLM python package (#7766) | 2025-09-22 16:43:35 +08:00 |
| include/tensorrt_llm | [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) | 2025-09-24 14:28:24 +08:00 |
| kernels | [None][chore] remove cubins for ci cases (#7902) | 2025-09-24 14:56:31 +08:00 |
| micro_benchmarks | [TRTLLM-6286][perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757) | 2025-09-21 11:38:17 +08:00 |
| tensorrt_llm | [OMNIML-2336][feat] Add NVFP4 x FP8 moe kernels (#7821) | 2025-09-24 12:14:35 -07:00 |
| tests | [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) | 2025-09-24 14:28:24 +08:00 |
| CMakeLists.txt | [TRTLLM-7989][infra] Bundle UCX and NIXL libs in the TRTLLM python package (#7766) | 2025-09-22 16:43:35 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |