| Name | Last commit | Last commit date |
|------|-------------|------------------|
| checkpoints | [TRTLLM-10062][feat] Enable MTP for Nemotron Super (#10754) | 2026-01-26 11:23:26 -05:00 |
| __init__.py | [None][feat] MiniMax M2 support (#10532) | 2026-01-14 17:38:58 +08:00 |
| modeling_auto.py | [TRTC-122][feat] Eagle3 Specdec UX improvements (#10124) | 2026-01-22 07:24:11 -08:00 |
| modeling_bert.py | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| modeling_clip.py | [None][feat] Support kv_cahce_reuse for HyperCLOVAX-Vision model (#7789) | 2025-10-21 11:11:24 +09:00 |
| modeling_deepseekv3.py | [None][feat] Perfect routing for Deepseek models (#11127) | 2026-01-30 23:46:35 -05:00 |
| modeling_exaone4.py | [https://nvbugs/5569713][fix] Disable fp8 deep gemm for EXAONE-4.0-32B-FP8 (#8429) | 2025-11-20 12:43:13 -05:00 |
| modeling_exaone_moe.py | [None][feat] K-EXAONE MTP support (#10796) | 2026-01-22 13:43:00 +09:00 |
| modeling_gemma3.py | [#6186][feat] Introduce QKNormRoPEAttention module (#6830) | 2025-09-05 14:04:41 +02:00 |
| modeling_gemma3vl.py | [None][fix] Multimodal InputProcessor dummy builder fix (#8916) | 2025-11-19 22:32:21 -08:00 |
| modeling_glm.py | [None][feat] GLM-4.5-Air support (#10653) | 2026-01-22 11:42:09 +08:00 |
| modeling_gpt_oss.py | [None][feat] Support nvfp4 for gptoss (#8956) | 2026-01-04 08:57:44 -05:00 |
| modeling_hunyuan_dense.py | [None][feat] Add Tencent HunYuanDenseV1 model support (#7081) | 2025-09-23 09:27:29 +08:00 |
| modeling_hunyuan_moe.py | [TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (#9486) | 2025-12-01 08:37:07 +08:00 |
| modeling_hyperclovax.py | [None][fix] Multimodal InputProcessor dummy builder fix (#8916) | 2025-11-19 22:32:21 -08:00 |
| modeling_llama_min_latency.py | [https://nvbugs/5803813][fix] Fix llama 4 min latency (#10724) | 2026-01-25 18:12:21 +08:00 |
| modeling_llama.py | [https://nvbugs/5691730][fix] Have LoRa bf16 ckpts work with Llama 3.3-70B-fp8 (#9808) | 2026-02-02 16:26:46 +08:00 |
| modeling_llava_next.py | [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) | 2025-12-22 06:32:49 -05:00 |
| modeling_minimaxm2.py | [None][feat] MiniMax M2 support (#10532) | 2026-01-14 17:38:58 +08:00 |
| modeling_mistral_large3.py | [None][feat] Support Mistral Large3 LLM part (#9820) | 2025-12-13 11:44:27 +08:00 |
| modeling_mistral.py | [None][fix] Mistral large 3 few code refine (#10405) | 2026-01-08 06:38:49 -05:00 |
| modeling_mixtral.py | [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) | 2025-09-09 12:18:56 -04:00 |
| modeling_mllama.py | feat: support duplicate_kv_weight for qwen3 blockwise scale (#5459) | 2025-06-30 11:49:22 +08:00 |
| modeling_multimodal_encoder.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| modeling_multimodal_utils.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| modeling_nemotron_h.py | [TRTLLM-10062][feat] Enable MTP for Nemotron Super (#10754) | 2026-01-26 11:23:26 -05:00 |
| modeling_nemotron_nano.py | [None][fix] Multimodal InputProcessor dummy builder fix (#8916) | 2025-11-19 22:32:21 -08:00 |
| modeling_nemotron_nas.py | [None][feat] add specdec to nemotron nas (#8985) | 2025-11-19 19:28:35 +01:00 |
| modeling_nemotron.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_phi3.py | [https://nvbugs/5540752][fix] Support quantized Phi4 MM models (#8190) | 2025-10-20 06:36:09 -04:00 |
| modeling_phi4mm.py | [None][fix] Multimodal InputProcessor dummy builder fix (#8916) | 2025-11-19 22:32:21 -08:00 |
| modeling_pixtral.py | [TRTLLM-7442][model] Remove unnecessary D2H copies (#7273) | 2025-09-03 23:14:20 -04:00 |
| modeling_qwen2vl.py | [None][fix] Enable AttentionDP on Qwen3-VL and fix test (#10435) | 2026-01-10 00:13:26 +09:00 |
| modeling_qwen3_moe.py | [TRTLLM-9992][perf] Enable PDL for CuteDSL kernels and overlap MoeOutputMemset (#10043) | 2025-12-20 03:12:41 -05:00 |
| modeling_qwen3_next.py | [None][feat] Layer-wise benchmarks: make model init more general and support weights loading (#10562) | 2026-01-13 19:17:03 +08:00 |
| modeling_qwen3.py | [#4745][fix] Pass lora_params through Qwen2/3 model forward (#10174) | 2026-01-07 15:30:17 +08:00 |
| modeling_qwen3vl_moe.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| modeling_qwen3vl.py | [None][fix] Add missing absolute pe in Qwen3-VL Vision Encoder (#11065) | 2026-01-30 09:59:36 +09:00 |
| modeling_qwen_moe.py | [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) | 2025-09-09 12:18:56 -04:00 |
| modeling_qwen.py | [#4745][fix] Pass lora_params through Qwen2/3 model forward (#10174) | 2026-01-07 15:30:17 +08:00 |
| modeling_radio.py | [TRTLLM-8579][feat] Support quantized model for nano-v2-vlm (#8304) | 2025-10-16 09:44:11 +08:00 |
| modeling_seedoss.py | [None][feat] Support Seed-OSS model in pytorch backend (#7496) | 2025-09-24 03:57:12 -07:00 |
| modeling_siglip.py | [None][feat] Support kv_cahce_reuse for HyperCLOVAX-Vision model (#7789) | 2025-10-21 11:11:24 +09:00 |
| modeling_speculative.py | [TRTLLM-10062][feat] Enable MTP for Nemotron Super (#10754) | 2026-01-26 11:23:26 -05:00 |
| modeling_starcoder2.py | [TRTLLM-7967][feat] Adding Starcoder2 PyTorch Backend Support (#8923) | 2025-11-24 11:23:22 -08:00 |
| modeling_utils.py | [https://nvbugs/5781589][fix] Implement pp skip forward for all spec workers. (#10578) | 2026-01-14 09:36:35 +08:00 |
| modeling_vila.py | [None][fix] Multimodal InputProcessor dummy builder fix (#8916) | 2025-11-19 22:32:21 -08:00 |