| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | feat: Support Gemma3-1b-it in Pytorch workflow (#3999) | 2025-05-14 14:02:44 +08:00 |
| `.gitkeep` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `modeling_auto.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `modeling_bert.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_clip.py` | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00 |
| `modeling_deepseekv3.py` | Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216) | 2025-05-22 03:42:36 +08:00 |
| `modeling_gemma3.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `modeling_llama.py` | [feat] Integrate Hopper chunked attention kernels (#4330) | 2025-05-22 17:10:57 -04:00 |
| `modeling_llava_next.py` | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00 |
| `modeling_mamba_hybrid.py` | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00 |
| `modeling_mistral.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `modeling_mixtral.py` | perf: Eliminate the need for attention DP padding when possible (#3439) | 2025-05-17 13:30:55 +08:00 |
| `modeling_mllama.py` | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| `modeling_multimodal_encoder.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_multimodal_utils.py` | Adding option to specify a set of token ids for multimodal tokens (#4107) | 2025-05-07 12:15:41 +08:00 |
| `modeling_nemotron_h.py` | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00 |
| `modeling_nemotron_nas.py` | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00 |
| `modeling_nemotron.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| `modeling_qwen2vl.py` | feat: enhance trtllm serve multimodal (#3757) | 2025-05-15 16:16:31 -07:00 |
| `modeling_qwen3_moe.py` | [Fix][Qwen3] fix bug of qwen3 fp4 workflow with EP (#4575) | 2025-05-23 13:34:05 +08:00 |
| `modeling_qwen3.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `modeling_qwen_moe.py` | perf: Eliminate the need for attention DP padding when possible (#3439) | 2025-05-17 13:30:55 +08:00 |
| `modeling_qwen.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `modeling_siglip.py` | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00 |
| `modeling_utils.py` | fix: skip weights defined in create_weights for pp. (#4447) | 2025-05-21 10:13:20 +08:00 |
| `modeling_vila.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |