Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00).
Latest commit:

* Add TRT-LLM Gen MOE to Deepseek
  * Fix fused MoE rebase bug
  * Fix atol in test_fp4_gemm_quantize.py
  * Fix FusedMoe
  * Disable 2nd routing kernel preexit
  * Bump routing reduction to fp32 (first sketch below)
  * Disable PDL for fc1
  * [DEBUG] Lift token limit to 16k
  * [Bugfix] Token limit to 16k + fp32 routing + tanh
  * Make fp8 tileN 8
  * Fix FP8 MoE + remove redundant temp output for FP4
  * [FP8-only] Avoid wasting CTAs for activation kernel
  * fix: unblock FP8 weight loading with trtllm-gen
  * Remove max_token limit for trtllm-gen path
  * perf: avoid type-conversion and fill_ from aten (second sketch below)
  * Minor fix

  Signed-off-by: Hao Lu <haolu@nvidia.com>
* Fix rebase issues

  Signed-off-by: Hao Lu <haolu@nvidia.com>
* Fix compile issue

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
* CI clean

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
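Several items above ("Bump routing reduction to fp32", "fp32 routing + tanh") concern MoE router numerics. Below is a minimal sketch of that idea, assuming a generic softmax/top-k router; the function and tensor names are illustrative and are not the TensorRT-LLM Gen kernel API. Doing the score reduction in float32 and bounding the logits with tanh keeps near-tied expert scores from collapsing under half-precision rounding.

```python
import torch

def route_tokens_fp32(router_logits: torch.Tensor, top_k: int):
    """Hypothetical sketch of 'fp32 routing + tanh'; not the TensorRT-LLM API."""
    # Upcast before the reduction: a bf16/fp16 softmax can round near-tied
    # expert scores to identical values and flip the top-k selection.
    logits = torch.tanh(router_logits.float())  # tanh bounds logits to (-1, 1)
    scores = torch.softmax(logits, dim=-1)      # reduction carried out in fp32
    topk_scores, topk_ids = torch.topk(scores, top_k, dim=-1)
    # Renormalize the selected experts' weights while still in fp32.
    topk_scores = topk_scores / topk_scores.sum(dim=-1, keepdim=True)
    return topk_scores, topk_ids

# Example: 4 tokens routed over 8 experts, top-2 selection.
weights, ids = route_tokens_fp32(torch.randn(4, 8, dtype=torch.bfloat16), top_k=2)
```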
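The "avoid type-conversion and fill_ from aten" item is a general PyTorch-extension pattern rather than anything MoE-specific. A hedged sketch of the technique (the shape and dtypes here are made up for illustration): allocate scratch output uninitialized in its final dtype, instead of zero-filling in one dtype and casting to another, which saves two extra aten kernel launches per call.

```python
import torch

shape = (4096, 7168)  # arbitrary example dimensions

# Pattern being avoided: zeros() launches a fill_ kernel and .to() launches a
# cast kernel before the real compute kernel even starts.
slow = torch.zeros(shape, dtype=torch.float32).to(torch.float16)

# Cheaper: allocate uninitialized storage directly in the target dtype and let
# the compute kernel write every element, so no fill or cast is needed.
fast = torch.empty(shape, dtype=torch.float16)
```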
| Name |
|---|
| baichuan/ |
| bert/ |
| bloom/ |
| chatglm/ |
| clip/ |
| cogvlm/ |
| commandr/ |
| dbrx/ |
| deepseek_v1/ |
| deepseek_v2/ |
| dit/ |
| eagle/ |
| enc_dec/ |
| falcon/ |
| gemma/ |
| gpt/ |
| gptj/ |
| gptneox/ |
| grok/ |
| llama/ |
| mamba/ |
| medusa/ |
| mllama/ |
| mmdit_sd3/ |
| mpt/ |
| multimodal_encoders/ |
| nemotron_nas/ |
| opt/ |
| phi/ |
| phi3/ |
| qwen/ |
| recurrentgemma/ |
| redrafter/ |
| stdit/ |
| unet/ |
| __init__.py |
| automodel.py |
| convert_utils.py |
| generation_mixin.py |
| model_weights_loader.py |
| modeling_utils.py |