TensorRT-LLM/cpp
Vincent Huang 8505c3ad88
Optimize gemm perf for gpt-oss in Spark
1. Support a bias epilogue in the cuBLASLt backend (sketched below).
2. Replace the native torch mm with the TRT-LLM mm for gpt-oss on SM121.
3. Add a lookup table for the gpt-oss GEMMs on SM121 (sketched below).

Signed-off-by: Vincent Huang <vincenth@nvidia.com>
Signed-off-by: list <58580514+farazkh80@users.noreply.github.com>
2025-09-30 17:25:58 +00:00
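Item 1 above folds the bias add into the matmul itself rather than launching a separate elementwise kernel afterwards. The following is a minimal sketch of how a bias epilogue is requested through the public cuBLASLt API, assuming an FP16, column-major GEMM with FP32 accumulation; the function name, data types, and leading dimensions are illustrative and error checking is omitted, so this is not the actual TensorRT-LLM backend code.

```cpp
// Sketch: D = A * B + bias with the bias add fused into the cuBLASLt matmul epilogue.
#include <cublasLt.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdint>

// A: m x k, B: k x n, D: m x n, bias: length m. All device pointers, column-major.
void gemmWithFusedBias(cublasLtHandle_t handle, half const* A, half const* B, half* D, half const* bias, int64_t m,
    int64_t n, int64_t k, void* workspace, size_t workspaceSize, cudaStream_t stream)
{
    cublasLtMatmulDesc_t opDesc;
    cublasLtMatmulDescCreate(&opDesc, CUBLAS_COMPUTE_32F, CUDA_R_32F);

    // Request the bias epilogue and hand cuBLASLt the bias vector.
    cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_BIAS;
    cublasLtMatmulDescSetAttribute(opDesc, CUBLASLT_MATMUL_DESC_EPILOGUE, &epilogue, sizeof(epilogue));
    cublasLtMatmulDescSetAttribute(opDesc, CUBLASLT_MATMUL_DESC_BIAS_POINTER, &bias, sizeof(bias));

    cublasLtMatrixLayout_t aDesc, bDesc, dDesc;
    cublasLtMatrixLayoutCreate(&aDesc, CUDA_R_16F, m, k, m);
    cublasLtMatrixLayoutCreate(&bDesc, CUDA_R_16F, k, n, k);
    cublasLtMatrixLayoutCreate(&dDesc, CUDA_R_16F, m, n, m);

    float alpha = 1.0f;
    float beta = 0.0f;
    // algo == nullptr lets cuBLASLt pick via its heuristic; a tuned algo could come from a lookup table instead.
    cublasLtMatmul(handle, opDesc, &alpha, A, aDesc, B, bDesc, &beta, D, dDesc, D, dDesc, /*algo=*/nullptr, workspace,
        workspaceSize, stream);

    cublasLtMatrixLayoutDestroy(dDesc);
    cublasLtMatrixLayoutDestroy(bDesc);
    cublasLtMatrixLayoutDestroy(aDesc);
    cublasLtMatmulDescDestroy(opDesc);
}
```

Item 3 is shape-keyed tuning: known gpt-oss problem sizes on SM121 resolve to a pre-profiled GEMM configuration instead of re-running the heuristic at runtime, while unknown shapes fall back to the default path. Below is a minimal sketch of such a table, with placeholder field names, shapes, and values rather than the entries that actually ship in TensorRT-LLM.

```cpp
// Sketch: map known (m, n, k) problem sizes to a tuned GEMM configuration.
#include <cstdint>
#include <map>
#include <optional>
#include <tuple>

struct GemmConfig
{
    int tileId; // tuned tile / algorithm index (placeholder)
    int splitK; // split-K factor (placeholder)
};

using GemmShape = std::tuple<int64_t, int64_t, int64_t>; // (m, n, k)

// Populated offline from profiling runs on the target GPU; the entries below are placeholders.
static std::map<GemmShape, GemmConfig> const kSm121GptOssGemmTable = {
    {{1, 2880, 2880}, {3, 1}},
    {{1, 5760, 2880}, {7, 2}},
};

std::optional<GemmConfig> lookupGemmConfig(int64_t m, int64_t n, int64_t k)
{
    auto it = kSm121GptOssGemmTable.find(GemmShape{m, n, k});
    if (it == kSm121GptOssGemmTable.end())
    {
        return std::nullopt; // unknown shape: fall back to the runtime heuristic
    }
    return it->second;
}
```

Because unmatched shapes simply return nothing, the table only needs to cover the handful of gpt-oss problem sizes that matter on SM121.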
Name | Latest commit | Last updated
cmake | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
include/tensorrt_llm | Optimize gemm perf for gpt-oss in Spark | 2025-09-30 17:25:58 +00:00
kernels | [fmha] fixes: fp8 kernels, assert removal, ampere-style sinks | 2025-09-25 04:58:54 +00:00
micro_benchmarks | [TRTLLM-7319][perf] Fuse slicing into MoE. (#6728) | 2025-08-25 16:52:30 -04:00
tensorrt_llm | Optimize gemm perf for gpt-oss in Spark | 2025-09-30 17:25:58 +00:00
tests | pre-commit | 2025-09-25 05:02:17 +00:00
CMakeLists.txt | [None][feat] Enable nanobind as the default binding library (#6608) | 2025-08-22 09:48:41 +02:00
conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00
conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00
libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00