TensorRT-LLM/cpp
Latest commit 1f8ae2b2db by Yuening Li: [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629)
Signed-off-by: Yuening Li <62227368+yueningl@users.noreply.github.com>
Date: 2025-08-15 17:15:49 -04:00
| Name                 | Last commit                                                                                                    | Date                      |
|----------------------|----------------------------------------------------------------------------------------------------------------|---------------------------|
| cmake                | feat: reduce unnecessary kernel generation (#5476)                                                             | 2025-07-04 14:37:49 +08:00 |
| include/tensorrt_llm | [None][fix] Fix responsibility boundary between the assert and tllmException files (#6723)                     | 2025-08-15 10:34:49 +08:00 |
| kernels              | [None][feat] Add support for Hopper MLA chunked prefill (#6655)                                                | 2025-08-14 10:39:26 +08:00 |
| micro_benchmarks     | [TRTLLM-6744][feat] Remove input_sf swizzle for module WideEPMoE (#6231)                                       | 2025-08-08 11:13:42 +08:00 |
| tensorrt_llm         | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629)                      | 2025-08-15 17:15:49 -04:00 |
| tests                | [https://nvbugs/5415862][fix] Update cublas as 12.9.1 and cuda memory alignment as 256 (#6501)                 | 2025-08-15 11:10:59 +08:00 |
| CMakeLists.txt       | [TRTLLM-7141][infra] Use repo mirrors to avoid intermittent network failures (#6836)                           | 2025-08-15 11:16:07 +08:00 |
| conandata.yml        | infra: add conan (#3744)                                                                                       | 2025-04-30 11:53:14 -07:00 |
| conanfile.py         | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)                     | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py     | fix cuda driver link issue with driver version less than 12.3 (#5025)                                          | 2025-06-10 15:27:39 +08:00 |