
MLLaMA (Llama 3.2 Vision model)

MLLaMA is a multimodal model and reuses the multimodal modules in examples/models/core/multimodal.