# Micro Benchmarks

This folder contains benchmarks for specific components in TRT-LLM, using [google-benchmark](https://github.com/google/benchmark).

## Building

To build, add the `--micro_benchmark` flag to `build_wheel.py`, or pass `-DBUILD_MICRO_BENCHMARKS=ON` directly to CMake.
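For example (a minimal sketch; the `scripts/` path and the `cpp/build` directory are assumptions about your checkout layout):

```bash
# Option 1: through the wheel build script (path assumed to be scripts/build_wheel.py).
python3 scripts/build_wheel.py --micro_benchmark

# Option 2: configure the C++ build with CMake and build the MoE benchmark target.
cmake -S cpp -B cpp/build -DBUILD_MICRO_BENCHMARKS=ON
cmake --build cpp/build --target mixtureOfExpertsBackendBenchmark
```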

## Benchmark Documentation

### Mixture of Experts Backend Benchmark

Target: `mixtureOfExpertsBackendBenchmark`

This benchmark covers the backend used by the MixtureOfExperts plugin. It allows you to benchmark different MOE configurations without building a TRT engine.

Usage:

```bash
./mixtureOfExpertsBackendBenchmark
# or
./mixtureOfExpertsBackendBenchmark --input_file <JSON benchmark definition>
```

For more information see:

```bash
./mixtureOfExpertsBackendBenchmark --help
```
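Because the benchmark is built on google-benchmark, the standard google-benchmark command-line flags should also work. A hedged sketch (the filter regex and output file name below are only illustrative):

```bash
# Select benchmarks by regex, repeat runs for more stable timings,
# and write the results to a JSON report.
./mixtureOfExpertsBackendBenchmark \
    --benchmark_filter='FP8' \
    --benchmark_repetitions=3 \
    --benchmark_out=moe_results.json \
    --benchmark_out_format=json
```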

The `gen-moe-benchmark-file.py` helper script can generate workload files for MOE benchmarks. This is useful for sharing or comparing configurations, for example when producing a reproduction case for a performance bug. A minimal sketch of the intended flow is shown below.
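The helper script's exact arguments are not documented here, so the following is only a sketch (it assumes the script writes its JSON definition to stdout; check its `--help` for the real interface):

```bash
# Generate a workload definition file (assumption: JSON is written to stdout).
python3 gen-moe-benchmark-file.py > moe_workload.json

# Run the MoE backend benchmark against the generated workload.
./mixtureOfExpertsBackendBenchmark --input_file moe_workload.json
```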