
# MLLaMA (Llama 3.2 Vision model)

MLLaMA is a multimodal model; it reuses the multimodal modules in `examples/multimodal`, so refer to that directory for the shared vision and runtime components.
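
As a rough illustration of how this example is typically used, the sketch below follows the common TensorRT-LLM convert-then-build workflow. The model path, output directories, and flag values are assumptions for illustration only; consult `python convert_checkpoint.py --help` and the `examples/multimodal` README for the authoritative options and the full multimodal run steps.

```bash
# Convert the Hugging Face Llama 3.2 Vision checkpoint to TensorRT-LLM format.
# Paths and dtype below are illustrative assumptions, not verified defaults.
python convert_checkpoint.py \
    --model_dir ./Llama-3.2-11B-Vision \
    --output_dir ./tllm_checkpoint_mllama \
    --dtype bfloat16

# Build a TensorRT engine from the converted checkpoint.
# max_batch_size here is an arbitrary example value.
trtllm-build \
    --checkpoint_dir ./tllm_checkpoint_mllama \
    --output_dir ./trt_engines/mllama \
    --gemm_plugin auto \
    --max_batch_size 1
```

The vision encoder and the multimodal runtime pipeline are handled by the shared scripts under `examples/multimodal`, which is why this directory only carries the checkpoint conversion script and its requirements.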