Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
* Add TRT-LLM Gen MOE to Deepseek
  - fix fused moe rebase bug
  - Fix atol in test_fp4_gemm_quantize.py
  - fix fused moe rebase bug
  - Fix FusedMoe
  - Disable 2nd routing kernel preexit
  - Bump routing reduction to fp32
  - Disable PDL for fc1
  - [DEBUG] Lift token limit to 16k
  - [Bugfix] Token limit to 16k + fp32 routing + tanh
  - Make fp8 tileN 8
  - Fix FP8 MoE + remove redundant temp output for FP4
  - [FP8-only] Avoid wasting CTAs for activation kernel
  - fix: unblock FP8 weight loading with trtllm-gen
  - Remove max_token limit for trtllm-gen path
  - perf: avoid type-conversion and fill_ from aten
  - Minor fix

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix rebase issues

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix compile issue

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

* CI clean

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

---------

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
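For context on the "Bump routing reduction to fp32" and "fp32 routing + tanh" commit lines: these refer to computing MoE router scores with an fp32 accumulation and a bounded (tanh) activation before top-k expert selection. The sketch below is a minimal, hypothetical illustration of that pattern in plain PyTorch; it is not the repository's actual kernel, and the function name `route_tokens` is invented for this example.

```python
import torch

def route_tokens(hidden: torch.Tensor, router_weight: torch.Tensor, top_k: int):
    """Hypothetical MoE routing sketch: fp32 reduction + tanh scores.

    hidden:        [num_tokens, hidden_dim], typically fp16/bf16 activations
    router_weight: [hidden_dim, num_experts]
    """
    # Up-cast before the matmul so the reduction accumulates in fp32,
    # mirroring the "Bump routing reduction to fp32" commit line.
    logits = hidden.float() @ router_weight.float()
    # Bounded routing scores via tanh ("fp32 routing + tanh" commit line).
    scores = torch.tanh(logits)
    # Standard top-k expert selection over the per-token scores.
    topk_scores, topk_experts = torch.topk(scores, top_k, dim=-1)
    return topk_scores, topk_experts
```

Keeping the reduction in fp32 avoids the accumulation error a half-precision matmul can introduce into routing decisions, which is one plausible motivation for the change noted in the log.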
| File |
|---|
| __init__.py |
| .gitkeep |
| modeling_auto.py |
| modeling_bert.py |
| modeling_deepseekv3.py |
| modeling_llama.py |
| modeling_llava_next.py |
| modeling_mamba_hybrid.py |
| modeling_mixtral.py |
| modeling_mllama.py |
| modeling_multimodal_encoder.py |
| modeling_multimodal_utils.py |
| modeling_nemotron_h.py |
| modeling_nemotron_nas.py |
| modeling_nemotron.py |
| modeling_qwen2vl.py |
| modeling_qwen_moe.py |
| modeling_qwen.py |
| modeling_utils.py |
| modeling_vila.py |
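The `modeling_auto.py` entry alongside the per-architecture `modeling_*.py` files suggests a dispatch layer that maps a model config's architecture string to the matching concrete model class. The following is a hypothetical sketch of that common registry pattern, not the repository's actual API; `MODEL_REGISTRY`, `register_model`, and `auto_model_from_config` are names invented for this illustration.

```python
# Hypothetical model-registry sketch; all names here are illustrative only.
from typing import Callable, Dict, Type

MODEL_REGISTRY: Dict[str, Type] = {}

def register_model(architecture: str) -> Callable[[Type], Type]:
    """Decorator mapping an architecture string to a model class."""
    def decorator(cls: Type) -> Type:
        MODEL_REGISTRY[architecture] = cls
        return cls
    return decorator

@register_model("LlamaForCausalLM")
class LlamaModelStub:
    """Stand-in for a concrete model class from a modeling_*.py file."""
    def __init__(self, config):
        self.config = config

def auto_model_from_config(config):
    """Instantiate the registered class for config.architectures[0]."""
    arch = config.architectures[0]
    try:
        return MODEL_REGISTRY[arch](config)
    except KeyError:
        raise ValueError(f"Unsupported architecture: {arch}")

# Usage with a minimal config stub:
class _Cfg:
    architectures = ["LlamaForCausalLM"]

model = auto_model_from_config(_Cfg())
```

The appeal of this design is that each `modeling_*.py` file can register itself at import time, so the auto layer never needs a hard-coded if/else chain over architectures.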