
Grok-1

This document shows how to build and run the Grok-1 model in TensorRT-LLM on a single GPU, on a single node with multiple GPUs, and on multiple nodes with multiple GPUs.

Prerequisite

First, clone the official Grok-1 repository with the command below and install its dependencies.

git clone https://github.com/xai-org/grok-1.git /path/to/folder

Then download the weights by following the repository's instructions.

Hardware

The Grok-1 model requires a node with at least 8x 80 GB of GPU memory.
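As a rough sanity check of that requirement (assuming the publicly stated figure of roughly 314B parameters, which is not stated in this document), the weights alone nearly fill eight 80 GB GPUs at bfloat16 precision, which is why the INT8 weight-only path used below matters:

```python
# Back-of-envelope estimate of Grok-1 weight memory (assumed ~314B parameters).
PARAMS = 314e9   # assumed parameter count; adjust if your checkpoint differs
N_GPUS = 8

bf16_gb = PARAMS * 2 / 1e9   # 2 bytes per weight at bfloat16
int8_gb = PARAMS * 1 / 1e9   # 1 byte per weight with INT8 weight-only

print(f"bf16: {bf16_gb:.0f} GB total, {bf16_gb / N_GPUS:.1f} GB per GPU")
print(f"int8: {int8_gb:.0f} GB total, {int8_gb / N_GPUS:.1f} GB per GPU")
# bf16 weights alone would take ~78.5 GB per 80 GB GPU, leaving almost no
# headroom for activations and KV cache; int8 (~39.3 GB/GPU) is comfortable.
```

This is only a weight-memory estimate; activations, KV cache, and runtime buffers add on top of it.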

Overview

The TensorRT-LLM Grok-1 implementation can be found in tensorrt_llm/models/grok/model.py. The TensorRT-LLM Grok-1 example code is located in examples/grok. There is one main file:

  • convert_checkpoint.py, which converts the xAI weights into the TensorRT-LLM checkpoint format.

In addition, shared scripts for inference and evaluation live in the parent examples folder; run.py (used below) handles inference.

Support Matrix

  • INT8 Weight-Only
  • Tensor Parallel
  • STRONGLY TYPED

Usage

The TensorRT-LLM Grok-1 example code is located in examples/grok. It takes the xAI weights as input and builds the corresponding TensorRT engines. The number of TensorRT engines depends on the number of GPUs used to run inference.
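The per-GPU engines correspond to tensor-parallel ranks (the --tp_size 8 used below): each large weight matrix is split across GPUs. A minimal illustrative sketch of column-wise sharding, not TensorRT-LLM's actual code:

```python
# Illustrative column-wise tensor-parallel sharding of a weight matrix.
def shard_columns(matrix, tp_size):
    """Split a 2-D weight matrix (list of rows) into tp_size column shards."""
    cols = len(matrix[0])
    assert cols % tp_size == 0, "hidden size must divide evenly across ranks"
    width = cols // tp_size
    # Each rank r keeps only its contiguous slice of every row.
    return [[row[r * width:(r + 1) * width] for row in matrix]
            for r in range(tp_size)]

W = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
shards = shard_columns(W, tp_size=2)
print(shards[0])  # rank 0 holds [[1, 2], [5, 6]]
print(shards[1])  # rank 1 holds [[3, 4], [7, 8]]
```

Each rank then computes its partial matrix product locally, and the partial results are combined with a collective (e.g. all-gather or all-reduce) at runtime.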

Build TensorRT engine(s)

Please install the required packages first to make sure the example uses a matching tensorrt_llm version:

pip install -r requirements.txt

Prepare the Grok-1 checkpoint by following the guide at https://github.com/xai-org/grok-1.

TensorRT-LLM Grok-1 builds TensorRT engine(s) from xAI's checkpoints.

Normally trtllm-build requires only a single GPU, but if you already have all the GPUs needed for inference, you can speed up engine building by enabling parallel builds with the --workers argument. Note that the workers feature currently supports only a single node.

Below are the step-by-step commands to run Grok-1 with TensorRT-LLM.

# Convert the official xAI weights into a TensorRT-LLM checkpoint
# (bfloat16 with INT8 weight-only quantization, 8-way tensor parallel).
python convert_checkpoint.py --model_dir ./tmp/grok-1/ \
                             --output_dir ./tllm_checkpoint_8gpus_bf16 \
                             --dtype bfloat16 \
                             --use_weight_only \
                             --tp_size 8 \
                             --workers 8

trtllm-build --checkpoint_dir ./tllm_checkpoint_8gpus_bf16 \
             --output_dir ./tmp/grok-1/trt_engines/bf16/8-gpus \
             --gpt_attention_plugin bfloat16 \
             --gemm_plugin bfloat16 \
             --moe_plugin bfloat16 \
             --paged_kv_cache enable \
             --remove_input_padding enable \
             --workers 8


# Run Grok-1 with 8 GPUs
mpirun -n 8 --allow-run-as-root \
    python ../../../run.py \
    --input_text "The answer to life the universe and everything is of course" \
    --engine_dir ./tmp/grok-1/trt_engines/bf16/8-gpus \
    --max_output_len 50 --top_p 1 --top_k 8 --temperature 0.3 \
    --vocab_file ./tmp/grok-1/tokenizer.model
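The sampling flags in the run command (--top_k 8, --top_p 1, --temperature 0.3) combine temperature scaling, top-k truncation, and nucleus (top-p) filtering. A simplified sketch of one decoding step, for illustration only (this is not TensorRT-LLM's implementation):

```python
import math
import random

def sample_next(logits, top_k=8, top_p=1.0, temperature=0.3):
    """One decoding step: temperature -> top-k -> top-p -> sample."""
    scaled = [l / temperature for l in logits]           # temperature scaling
    order = sorted(range(len(scaled)),
                   key=lambda i: scaled[i], reverse=True)[:top_k]  # top-k cut
    mx = max(scaled[i] for i in order)                   # stable softmax
    exps = [math.exp(scaled[i] - mx) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    kept, cum = [], 0.0                                  # nucleus (top-p) cut
    for idx, p in zip(order, probs):
        kept.append((idx, p))
        cum += p
        if cum >= top_p:
            break
    z = sum(p for _, p in kept)                          # renormalize and draw
    r = random.random() * z
    for idx, p in kept:
        r -= p
        if r <= 0:
            return idx
    return kept[-1][0]

print(sample_next([1.0, 5.0, 0.1], top_k=1))  # top_k=1 is greedy: prints 1
```

A low temperature such as 0.3 sharpens the distribution toward the highest-scoring tokens, while top_k=8 caps the candidate set; top_p=1 leaves the nucleus filter effectively disabled.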