# Micro Benchmarks

This folder contains benchmarks for specific components of TRT-LLM, implemented with [google-benchmark](https://github.com/google/benchmark).

## Building

To build, add the `--micro_benchmarks` flag to `build_wheel.py`, or pass `-DBUILD_MICRO_BENCHMARKS=ON` to CMake.
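
For example (a sketch; the `scripts/build_wheel.py` path and the `cpp/` CMake source directory are assumptions based on the usual TRT-LLM layout):

```bash
# Build the wheel with micro benchmarks enabled
# (script path assumed to be scripts/build_wheel.py).
python3 scripts/build_wheel.py --micro_benchmarks

# Or enable the benchmarks when configuring the C++ build directly
# (assumes the CMake source tree lives under cpp/).
cmake -S cpp -B cpp/build -DBUILD_MICRO_BENCHMARKS=ON
cmake --build cpp/build --target mixtureOfExpertsBackendBenchmark
```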

## Benchmark Documentation

### Mixture of Experts Backend Benchmark

Target: `mixtureOfExpertsBackendBenchmark`

This benchmark covers the backend used by the MixtureOfExperts plugin. It allows you to benchmark different MOE configurations without building a TRT engine.

Usage:

```bash
./mixtureOfExpertsBackendBenchmark
# or
./mixtureOfExpertsBackendBenchmark --input_file <JSON benchmark definition>
```

For more information see:

```bash
./mixtureOfExpertsBackendBenchmark --help
```

`gen-moe-benchmark-file.py` is a helper script that generates workload files for MOE benchmarks. This is useful for sharing or comparing configurations, such as when generating a reproduction case for a performance bug.
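
A minimal round trip might look like this (a sketch: the script's exact command-line interface is an assumption; check its `--help` output or source for the real options):

```bash
# Generate a workload definition (hypothetical invocation; the script may
# take an output-path argument instead of writing to stdout).
python3 gen-moe-benchmark-file.py > moe_workload.json

# Replay the captured workload in the benchmark binary.
./mixtureOfExpertsBackendBenchmark --input_file moe_workload.json
```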