
MLLaMA (Llama 3.2 Vision model)

MLLaMA is a multimodal model and reuses the multimodal modules in examples/models/core/multimodal. A sketch of the typical checkpoint conversion and engine build flow is shown below.
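The commands below are a minimal sketch of the usual TensorRT-LLM example workflow, not the verified instructions for this model: the model path and flag names (--model_dir, --output_dir, --dtype, --checkpoint_dir, --max_batch_size) are assumptions carried over from other TensorRT-LLM example scripts. Refer to this directory's full README and to examples/models/core/multimodal for the actual commands and runner options.

```bash
# Hypothetical walkthrough of the standard TensorRT-LLM example flow for MLLaMA.
# Paths and flag names are illustrative assumptions, not confirmed options of this script.

# 1. Convert the Hugging Face checkpoint into a TensorRT-LLM checkpoint.
python examples/models/core/mllama/convert_checkpoint.py \
    --model_dir  ./Llama-3.2-11B-Vision \
    --output_dir ./tllm_checkpoint_mllama \
    --dtype bfloat16

# 2. Build the TensorRT engine from the converted checkpoint.
trtllm-build \
    --checkpoint_dir ./tllm_checkpoint_mllama \
    --output_dir     ./engine_mllama \
    --max_batch_size 1

# 3. Run inference through the shared multimodal example runner
#    (see examples/models/core/multimodal for the actual entry point and flags).
```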