
TensorRT-LLM

A TensorRT Toolbox for Optimized Large Language Model Inference


Architecture   |   Results   |   Examples   |   Documentation



TensorRT-LLM Overview

TensorRT-LLM is an easy-to-use Python API for defining Large Language Models (LLMs) and building TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM contains components to create Python and C++ runtimes that execute those TensorRT engines. It also includes a backend for integration with the NVIDIA Triton Inference Server, a production-quality system for serving LLMs. Models built with TensorRT-LLM can be executed on a wide range of configurations, from a single GPU to multiple nodes with multiple GPUs (using Tensor Parallelism and/or Pipeline Parallelism).
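For a feel of the end-to-end workflow, here is a minimal sketch based on the high-level LLM API available in recent releases. The checkpoint name is just an example, and import paths can shift between versions, so treat this as illustrative rather than canonical:

```python
from tensorrt_llm import LLM, SamplingParams

# Example prompts; any strings work.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Builds (or loads) a TensorRT engine from a Hugging Face checkpoint.
# "TinyLlama/TinyLlama-1.1B-Chat-v1.0" is an arbitrary example model.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```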

The TensorRT-LLM Python API is architected to look similar to the PyTorch API. It provides a functional module containing functions like einsum, softmax, matmul, or view. The layers module bundles useful building blocks to assemble LLMs, such as an Attention block, an MLP, or an entire Transformer layer. Model-specific components, like GPTAttention or BertAttention, can be found in the models module.
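As a hedged sketch of that PyTorch-like style, the snippet below composes a small attention-score computation from functional ops inside a TensorRT-LLM network; exact signatures (e.g. net_guard, Tensor, matmul's transb flag) may differ across releases:

```python
import tensorrt as trt
import tensorrt_llm
from tensorrt_llm.functional import matmul, softmax

builder = tensorrt_llm.Builder()
network = builder.create_network()
with tensorrt_llm.net_guard(network):
    # Declare symbolic network inputs (batch, heads, head_dim are examples).
    q = tensorrt_llm.Tensor(name="q", dtype=trt.float16, shape=[1, 8, 64])
    k = tensorrt_llm.Tensor(name="k", dtype=trt.float16, shape=[1, 8, 64])
    # Compose functional ops into the graph: scaled-dot-product-style scores.
    scores = softmax(matmul(q, k, transb=True), dim=-1)
    scores.mark_output("scores", trt.float16)
```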

TensorRT-LLM comes with several popular models pre-defined. They can easily be modified and extended to fit custom needs. Refer to the Support Matrix for a list of supported models.
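For instance, a pre-defined model can typically be instantiated directly from a Hugging Face checkpoint; the classmethod below follows recent releases, and the checkpoint name is only an example:

```python
from tensorrt_llm.models import LLaMAForCausalLM

# "meta-llama/Llama-2-7b-hf" is an example (gated) checkpoint on the HF Hub.
model = LLaMAForCausalLM.from_hugging_face(
    "meta-llama/Llama-2-7b-hf",
    dtype="float16",
)
```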

To maximize performance and reduce memory footprint, TensorRT-LLM allows models to be executed using different quantization modes (refer to the Support Matrix). TensorRT-LLM supports INT4 and INT8 weights with FP16 activations (a.k.a. INT4/INT8 weight-only), as well as a complete implementation of the SmoothQuant technique.
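A quantization mode is usually expressed as a QuantConfig attached at model-conversion time. The sketch below assumes the QuantAlgo/QuantConfig names and the quant_config keyword from recent releases, which may differ in older versions:

```python
from tensorrt_llm.models import LLaMAForCausalLM
from tensorrt_llm.models.modeling_utils import QuantConfig
from tensorrt_llm.quantization import QuantAlgo

# INT8 weights with FP16 activations (weight-only). Other enum members,
# such as QuantAlgo.W4A16 (INT4 weight-only), cover the remaining modes.
quant_config = QuantConfig(quant_algo=QuantAlgo.W8A16)

model = LLaMAForCausalLM.from_hugging_face(
    "meta-llama/Llama-2-7b-hf",  # example checkpoint
    dtype="float16",
    quant_config=quant_config,
)
```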

Getting Started

To get started with TensorRT-LLM, visit our documentation at https://nvidia.github.io/TensorRT-LLM/.