| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | feat: Basic skeleton for Gemma3 VLM (#5108) | 2025-06-13 17:27:04 +08:00 |
| `.gitkeep` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `modeling_auto.py` | [feat] Implement model-agnostic one-engine eagle3 (#4778) | 2025-06-13 08:11:41 -07:00 |
| `modeling_bert.py` | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| `modeling_clip.py` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| `modeling_deepseekv3.py` | [feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723) | 2025-07-10 14:06:50 +08:00 |
| `modeling_gemma3.py` | test: Fix Gemma3 unit tests due to transformers upgrade (#5921) | 2025-07-10 17:24:10 -07:00 |
| `modeling_gemma3vl.py` | [refactor] Move vision parts from processor to model for Gemma3 (#5888) | 2025-07-11 15:13:51 -07:00 |
| `modeling_hyperclovax.py` | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |
| `modeling_llama_min_latency.py` | [fix] Fix llama4 min latency (#5117) | 2025-06-11 15:44:38 +08:00 |
| `modeling_llama.py` | [TRTLLM-6262] Fix Llama4 Scout FP4 crash issue (#5834) | 2025-07-09 14:23:21 +08:00 |
| `modeling_llava_next.py` | Update transformers to 4.53.0 (#5747) | 2025-07-09 09:32:24 -07:00 |
| `modeling_mistral.py` | feat(models): Mistral3.1 VLM pytorch backend support (#5529) | 2025-07-09 13:17:40 -07:00 |
| `modeling_mixtral.py` | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| `modeling_mllama.py` | feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) | 2025-06-30 11:49:22 +08:00 |
| `modeling_multimodal_encoder.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_multimodal_utils.py` | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| `modeling_nemotron_h.py` | [TRTLLM-5835][feat] Optimized Mamba2Mixer prefill (#5128) | 2025-06-16 16:29:17 +03:00 |
| `modeling_nemotron_nas.py` | feat: Add support for YARN in NemotronNAS models (#4906) | 2025-06-29 09:45:49 +03:00 |
| `modeling_nemotron.py` | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| `modeling_pixtral.py` | feat(models): Mistral3.1 VLM pytorch backend support (#5529) | 2025-07-09 13:17:40 -07:00 |
| `modeling_qwen2vl.py` | Update transformers to 4.53.0 (#5747) | 2025-07-09 09:32:24 -07:00 |
| `modeling_qwen3_moe.py` | [feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723) | 2025-07-10 14:06:50 +08:00 |
| `modeling_qwen3.py` | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| `modeling_qwen_moe.py` | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| `modeling_qwen.py` | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| `modeling_siglip.py` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| `modeling_speculative.py` | [refactor] Simplification of Speculative decoding configs (#5639) | 2025-07-10 11:37:30 -04:00 |
| `modeling_utils.py` | [ModelLoad] Concurrent load model (#5291) | 2025-07-03 22:18:04 +08:00 |
| `modeling_vila.py` | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |