| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | feat: Support Gemma3-1b-it in Pytorch workflow (#3999) | 2025-05-14 14:02:44 +08:00 |
| .gitkeep | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| modeling_auto.py | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| modeling_bert.py | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| modeling_clip.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| modeling_deepseekv3.py | fix: EP load balancer with MTP layer and route offset by EP rank (#4767) | 2025-06-01 00:07:44 +08:00 |
| modeling_gemma3.py | feat: Integration of Fused QKNorm+RoPE. (#4611) | 2025-05-28 11:20:45 +08:00 |
| modeling_llama_min_latency.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| modeling_llama.py | [fix] Fix Llama 3.3 70b EAGLE (#4772) | 2025-05-30 10:08:08 -04:00 |
| modeling_llava_next.py | refactor: extract and reuse filter_weights. (#4681) | 2025-05-27 19:48:01 +08:00 |
| modeling_mamba_hybrid.py | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00 |
| modeling_mistral.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| modeling_mixtral.py | refactor: extract and reuse filter_weights. (#4681) | 2025-05-27 19:48:01 +08:00 |
| modeling_mllama.py | refactor: extract and reuse filter_weights. (#4681) | 2025-05-27 19:48:01 +08:00 |
| modeling_multimodal_encoder.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_multimodal_utils.py | Adding option to specify a set of token ids for multimodal tokens (#4107) | 2025-05-07 12:15:41 +08:00 |
| modeling_nemotron_h.py | [TRTLLM-4783][feat] Mamba2 kernel updates for Nemotron-H (#4494) | 2025-06-01 13:56:44 +03:00 |
| modeling_nemotron_nas.py | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00 |
| modeling_nemotron.py | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| modeling_qwen2vl.py | feat: enhance trtllm serve multimodal (#3757) | 2025-05-15 16:16:31 -07:00 |
| modeling_qwen3_moe.py | refactor: extract and reuse filter_weights. (#4681) | 2025-05-27 19:48:01 +08:00 |
| modeling_qwen3.py | feat: Integration of Fused QKNorm+RoPE. (#4611) | 2025-05-28 11:20:45 +08:00 |
| modeling_qwen_moe.py | refactor: extract and reuse filter_weights. (#4681) | 2025-05-27 19:48:01 +08:00 |
| modeling_qwen.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| modeling_siglip.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| modeling_utils.py | fix: [nvbugs/5310520] disable embed_tokens's TP when DP enabled for llama model. (#4758) | 2025-05-30 18:04:08 +08:00 |
| modeling_vila.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |