
MLLaMA
===

Workflow for the latest MLLaMA model from Hugging Face

  • Install the latest transformers
pip install -U transformers
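
Mllama support landed only in recent transformers releases (4.45 or newer, at the time of writing), so it is worth checking that the model class is importable:

python3 -c "import transformers; from transformers import MllamaForConditionalGeneration; print(transformers.__version__)"
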
  • Build the vision encoder engine (exported via ONNX)
python examples/multimodal/build_visual_engine.py --model_type mllama \
                                                  --model_path /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
                                                  --output_dir /tmp/mllama/trt_engines/encoder/
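
This serializes a TensorRT engine into --output_dir; the run command further below loads it by the name visual_encoder.engine, so confirm that file was produced:

ls /tmp/mllama/trt_engines/encoder/
# expect visual_encoder.engine (plus its config) in the listing
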
  • Build and run the decoder model with TensorRT-LLM
python examples/mllama/convert_checkpoint.py --model_dir /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
                              --output_dir /tmp/mllama/trt_ckpts \
                              --dtype bfloat16

python3 -m tensorrt_llm.commands.build \
            --checkpoint_dir /tmp/mllama/trt_ckpts \
            --output_dir /tmp/mllama/trt_engines/decoder/ \
            --max_num_tokens 4096 \
            --max_seq_len 2048 \
            --workers 1 \
            --gemm_plugin auto \
            --max_batch_size 4 \
            --max_encoder_input_len 4100 \
            --input_timing_cache model.cache
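
A note on the sizing flags: --max_encoder_input_len caps the cross-attention input, i.e. how many vision-encoder output tokens the decoder can attend to per request (the 4100 used here should correspond to the encoder's maximum output for a single image in this setup), while --max_num_tokens and --max_seq_len bound the decoder's own token budget.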

wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg

# Run inference via multimodal/run.py with the C++ runtime
python3 examples/multimodal/run.py --visual_engine_dir /tmp/mllama/trt_engines/encoder/ \
                                   --visual_engine_name visual_encoder.engine \
                                   --llm_engine_dir /tmp/mllama/trt_engines/decoder/ \
                                   --hf_model_dir /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
                                   --image_path ./rabbit.jpg \
                                   --input_text "<|image|><|begin_of_text|>If I had to write a haiku for this one" \
                                   --max_new_tokens 50 \
                                   --batch_size 2

model_runner_cpp (the C++ runtime) is used by default. To switch to the Python runtime (model_runner), add `--use_py_session` to the run command above.
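
As a sanity check on the engine output, the same image and prompt can be run through plain Hugging Face generation. A minimal sketch, assuming transformers 4.45+ and the same local model path used above:

python3 - <<'EOF'
# Cross-check sketch: greedy generation with the vanilla HF model
# (assumes transformers >= 4.45, which ships MllamaForConditionalGeneration).
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_dir = "/home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/"
processor = AutoProcessor.from_pretrained(model_dir)
model = MllamaForConditionalGeneration.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "<|image|><|begin_of_text|>If I had to write a haiku for this one"
inputs = processor(images=Image.open("rabbit.jpg"), text=prompt,
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
EOF

The decoded text should roughly match the TensorRT-LLM run above, though exact tokens can differ between runtimes.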