| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | feat: Nemotron-H model support (#3430) | 2025-04-16 14:05:56 -07:00 |
| `.gitkeep` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `modeling_auto.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `modeling_bert.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_deepseekv3.py` | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| `modeling_llama.py` | [fix] Fix flashinfer + speculation issues (#3686) | 2025-04-28 14:34:22 -04:00 |
| `modeling_llava_next.py` | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| `modeling_mamba_hybrid.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_mixtral.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_mllama.py` | chore: Use ellipsis as default value to detect whether residual argument is provided (#3626) | 2025-04-17 12:31:58 +08:00 |
| `modeling_multimodal_encoder.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_multimodal_utils.py` | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| `modeling_nemotron_h.py` | Fix rotary_emb param in NemotronH attention (#3646) | 2025-04-16 21:03:07 -07:00 |
| `modeling_nemotron_nas.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `modeling_nemotron.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_qwen2vl.py` | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| `modeling_qwen_moe.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_qwen.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_utils.py` | Fix fp8 kvcache (#3877) | 2025-04-29 10:31:10 +08:00 |
| `modeling_vila.py` | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |