| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | chore: Rename nvsmall to nemotron nas (#3447) | 2025-04-10 23:16:52 +08:00 |
| .gitkeep | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| modeling_auto.py | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
| modeling_bert.py | Refactor imports inside tensorrt_llm._torch. (#3015) | 2025-03-26 11:01:07 +08:00 |
| modeling_deepseekv3.py | refactor: Remove _pp_forward. (#3496) | 2025-04-14 09:49:44 +08:00 |
| modeling_llama.py | refactor: Remove _pp_forward. (#3496) | 2025-04-14 09:49:44 +08:00 |
| modeling_llava_next.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_mamba_hybrid.py | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| modeling_mixtral.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| modeling_mllama.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| modeling_multimodal_encoder.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_multimodal_utils.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_nemotron_nas.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| modeling_nemotron.py | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| modeling_qwen2vl.py | feat: Add Qwen2.5-VL and refactor Qwen2-VL (#3156) | 2025-04-10 04:09:03 +08:00 |
| modeling_qwen_moe.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| modeling_qwen.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| modeling_utils.py | refactor: Remove _pp_forward. (#3496) | 2025-04-14 09:49:44 +08:00 |
| modeling_vila.py | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |