MLLaMA
===
Workflow for running the latest MLLaMA (Llama 3.2 Vision) model from Hugging Face with TensorRT-LLM:
- Install the latest `transformers`:
pip install -U transformers
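MLLaMA support only exists in recent `transformers` releases. A quick sanity check like the sketch below confirms the installed version exposes the model classes; the 4.45.0 minimum is an assumption, not stated in this README.

```python
# Sanity check: confirm the installed transformers build exposes the MLLaMA classes.
# The 4.45.0 minimum version is an assumption; adjust it for your environment.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.45.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)

# These imports fail on releases without Llama 3.2 Vision support.
from transformers import AutoProcessor, MllamaForConditionalGeneration  # noqa: F401

print(f"transformers {transformers.__version__} provides MLLaMA support")
```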
- Build the vision encoder engine (exported via ONNX):
python examples/multimodal/build_visual_engine.py --model_type mllama \
--model_path /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
--output_dir /tmp/mllama/trt_engines/encoder/
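A successful build leaves a serialized engine in the output directory (the run command below expects `visual_encoder.engine`). To inspect its inputs and outputs, here is a minimal sketch using the TensorRT Python API; it assumes a recent TensorRT release with the tensor-name bindings.

```python
# Inspect the I/O tensors of the serialized vision encoder engine.
# Paths follow the commands in this README; the tensor-name API assumes a
# recent TensorRT Python package (8.5 or newer).
import tensorrt as trt

engine_path = "/tmp/mllama/trt_engines/encoder/visual_encoder.engine"
logger = trt.Logger(trt.Logger.WARNING)
with open(engine_path, "rb") as f:
    runtime = trt.Runtime(logger)
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)      # INPUT or OUTPUT
    shape = engine.get_tensor_shape(name)    # -1 marks dynamic dimensions
    dtype = engine.get_tensor_dtype(name)
    print(f"{mode.name:>6} {name}: shape={tuple(shape)}, dtype={dtype}")
```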
- Convert the checkpoint and build the decoder engine with TensorRT-LLM:
python examples/mllama/convert_checkpoint.py --model_dir /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
--output_dir /tmp/mllama/trt_ckpts \
--dtype bfloat16
python3 -m tensorrt_llm.commands.build \
--checkpoint_dir /tmp/mllama/trt_ckpts \
--output_dir /tmp/mllama/trt_engines/decoder/ \
--max_num_tokens 4096 \
--max_seq_len 2048 \
--workers 1 \
--gemm_plugin auto \
--max_batch_size 4 \
--max_encoder_input_len 4100 \
--input_timing_cache model.cache
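The build step records its parameters in a `config.json` next to the engine. A small sketch like the one below can confirm the limits above were picked up; the exact key layout (a `build_config` section) is assumed from current TensorRT-LLM releases.

```python
# Print the build limits recorded by the engine build so they can be checked
# against the flags above. The config.json layout (a "build_config" section)
# reflects current TensorRT-LLM releases; treat the key names as assumptions.
import json

with open("/tmp/mllama/trt_engines/decoder/config.json") as f:
    cfg = json.load(f)

build_cfg = cfg.get("build_config", cfg)
for key in ("max_batch_size", "max_num_tokens", "max_seq_len", "max_encoder_input_len"):
    print(f"{key}: {build_cfg.get(key)}")
```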
- Download a sample image and run the model:
wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg
# Run the test via examples/multimodal/run.py with the C++ runtime
python3 examples/multimodal/run.py --visual_engine_dir /tmp/mllama/trt_engines/encoder/ \
--visual_engine_name visual_encoder.engine \
--llm_engine_dir /tmp/mllama/trt_engines/decoder/ \
--hf_model_dir /home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/ \
--image_path ./rabbit.jpg \
--input_text "<|image|><|begin_of_text|>If I had to write a haiku for this one" \
--max_new_tokens 50 \
--batch_size 2
The command above uses `model_runner_cpp` (the C++ runtime) by default. To switch to the Python `model_runner`, add `--use_py_session` to the command.
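As an optional cross-check, the same image and prompt can be fed through plain Hugging Face `transformers` to get a reference completion to compare against the TensorRT-LLM output. This is only a sketch: it reuses the checkpoint directory from the commands above, and the dtype and device placement are assumptions.

```python
# Reference generation with Hugging Face transformers, for comparison with the
# TensorRT-LLM output above. The model path reuses the directory from the
# convert_checkpoint.py command; dtype and device placement are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_dir = "/home/scratch.bhsueh_gpu/Evian3/evian3-from-huggingface/Llama-3.2-11B-Vision/"
model = MllamaForConditionalGeneration.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_dir)

image = Image.open("rabbit.jpg")
prompt = "<|image|><|begin_of_text|>If I had to write a haiku for this one"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=False))
```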