| File | Latest commit | Date |
| --- | --- | --- |
| `__init__.py` | chore: Rename nvsmall to nemotron nas (#3447) | 2025-04-10 23:16:52 +08:00 |
| `.gitkeep` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `modeling_auto.py` | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| `modeling_bert.py` | Refactor imports inside tensorrt_llm._torch. (#3015) | 2025-03-26 11:01:07 +08:00 |
| `modeling_deepseekv3.py` | Raise error for PP + MTP (#3244) | 2025-04-03 04:45:31 +08:00 |
| `modeling_llama.py` | test: add torch flow test case in qa test list (#3404) | 2025-04-11 16:57:41 +08:00 |
| `modeling_llava_next.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_mamba_hybrid.py` | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| `modeling_mixtral.py` | feat: Optionally split MoE inputs into chunks to reduce GPU memory usage (#3104) | 2025-04-01 16:07:02 +08:00 |
| `modeling_mllama.py` | fix: mllama e2e pytorch flow fix (#3397) | 2025-04-11 17:33:15 +08:00 |
| `modeling_multimodal_encoder.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_multimodal_utils.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `modeling_nemotron_nas.py` | chore: Rename nvsmall to nemotron nas (#3447) | 2025-04-10 23:16:52 +08:00 |
| `modeling_nemotron.py` | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| `modeling_qwen2vl.py` | feat: Add Qwen2.5-VL and refactor Qwen2-VL (#3156) | 2025-04-10 04:09:03 +08:00 |
| `modeling_qwen_moe.py` | feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) | 2025-04-10 22:45:57 +08:00 |
| `modeling_qwen.py` | feat: Add Qwen2.5-VL and refactor Qwen2-VL (#3156) | 2025-04-10 04:09:03 +08:00 |
| `modeling_utils.py` | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |
| `modeling_vila.py` | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |