TensorRT-LLM/cpp
Latest commit 5ca2b9bb15 by DylanChen-NV:
[TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615)
Signed-off-by: Dylan Chen <191843203+DylanChen-NV@users.noreply.github.com>
2025-07-07 18:04:57 +08:00
Name                   Last commit                                                                              Last updated
cmake                  feat: reduce unnecessary kernel generation (#5476)                                       2025-07-04 14:37:49 +08:00
include/tensorrt_llm   refactor: decoding inputs (#5679)                                                        2025-07-06 08:21:02 +02:00
kernels                chore: Improve documentation of Kv_block_array (#5765)                                   2025-07-05 22:25:27 +02:00
micro_benchmarks       feat: Add support for per expert activation scaling factors (#5013)                      2025-06-28 09:10:35 +12:00
tensorrt_llm           [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615)                2025-07-07 18:04:57 +08:00
tests                  Refactor the topk parallelization part for the routing kernels (#5567)                   2025-07-07 15:53:25 +08:00
CMakeLists.txt         feat: reduce unnecessary kernel generation (#5476)                                       2025-07-04 14:37:49 +08:00
conandata.yml          infra: add conan (#3744)                                                                 2025-04-30 11:53:14 -07:00
conanfile.py           feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)  2025-06-08 10:25:18 +08:00
libnuma_conan.py       fix: cuda driver link issue with driver version less than 12.3 (#5025)                   2025-06-10 15:27:39 +08:00