TensorRT-LLM/cpp
Latest commit: 6c91f1c7ac by Tracin, 2025-06-10 22:01:37 +08:00
Mxfp8xmxfp4 quant mode (#4978)
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |
| include/tensorrt_llm | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| kernels | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| micro_benchmarks | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| tensorrt_llm | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| tests | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| CMakeLists.txt | fix: better method to help torch find nvtx3 (#4110) | 2025-05-15 16:42:30 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |