TensorRT-LLM/tensorrt_llm/_torch
hlu1 b4ed4b22f3
[Arch] Freeze model_config (#4814)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Co-authored-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-06-04 02:51:35 +08:00
attention_backend    [feat] Enable NVFP4 output for TRTLLM attention kernels (#4737)  2025-06-03 10:00:17 +08:00
auto_deploy          [Architecture] Refactor FusedMoE (#4790)  2025-06-03 14:02:19 +08:00
compilation          [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952)  2025-05-19 22:12:25 +08:00
custom_ops           [feat] Enable NVFP4 output for TRTLLM attention kernels (#4737)  2025-06-03 10:00:17 +08:00
distributed          Release 0.20 to main (#4577)  2025-05-28 16:25:33 +08:00
models               [Arch] Freeze model_config (#4814)  2025-06-04 02:51:35 +08:00
modules              [Architecture] Refactor FusedMoE (#4790)  2025-06-03 14:02:19 +08:00
peft                 feat: support multi lora adapters and TP (#3885)  2025-05-08 23:45:45 +08:00
pyexecutor           [Arch] Freeze model_config (#4814)  2025-06-04 02:51:35 +08:00
speculative          [https://nvbugs/5271281][fix] fix a pd+mtp accuracy issue (#4536)  2025-06-03 10:03:34 +08:00
__init__.py          Update TensorRT-LLM (#2755)  2025-02-11 03:01:00 +00:00
autotuner.py         Downgrade the logger level for fallback tactic warning. (#4440)  2025-05-19 18:26:54 +08:00
expert_statistic.py  feat: large-scale EP(part 5: Static EP load balancer with offline statistics) (#4695)  2025-06-02 01:25:02 +08:00
llm.py               test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)  2025-03-26 18:14:35 +08:00
metadata.py          feat: no-cache attention in PyTorch workflow (#3085)  2025-04-05 01:54:32 +08:00
model_config.py      [Arch] Freeze model_config (#4814)  2025-06-04 02:51:35 +08:00
utils.py             [feat] Enable NVFP4 output for TRTLLM attention kernels (#4737)  2025-06-03 10:00:17 +08:00