Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00.
* Add TRT-LLM Gen MOE to Deepseek
  - Fix fused moe rebase bug
  - Fix atol in test_fp4_gemm_quantize.py
  - Fix FusedMoe
  - Disable 2nd routing kernel preexit
  - Bump routing reduction to fp32
  - Disable PDL for fc1
  - [DEBUG] Lift token limit to 16k
  - [Bugfix] Token limit to 16k + fp32 routing + tanh
  - Make fp8 tileN 8
  - Fix FP8 MoE + remove redundant temp output for FP4
  - [FP8-only] Avoid wasting CTAs for activation kernel
  - fix: unblock FP8 weight loading with trtllm-gen
  - Remove max_token limit for trtllm-gen path
  - perf: avoid type-conversion and fill_ from aten
  - Minor fix

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix rebase issues

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix compile issue

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

* CI clean

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

---------

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
| Name |
|---|
| apps |
| auto_deploy |
| bindings/executor |
| cpp/executor |
| cpp_library |
| disaggregated |
| dora |
| draft_target_model |
| eagle |
| infinitebench |
| language_adapter |
| llm-api |
| llm-eval/lm-eval-harness |
| lookahead |
| medusa |
| models |
| openai_triton |
| prompt_lookup |
| python_plugin |
| pytorch |
| quantization |
| redrafter |
| sample_weight_stripping |
| scaffolding |
| serve |
| constraints.txt |
| eval_long_context.py |
| generate_checkpoint_config.py |
| generate_xgrammar_tokenizer_info.py |
| gpqa_llmapi.py |
| hf_lora_convert.py |
| mmlu_llmapi.py |
| mmlu.py |
| run.py |
| summarize.py |
| utils.py |