TensorRT-LLM/tensorrt_llm/_torch
Latest commit: 97f7e12588 by Jinyang Yuan, 2025-07-28 01:37:11 -04:00
[fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Name    Last commit    Date
attention_backend [TRTLLM-6650][feat] Enhance beam search support with CUDA graph integration (#6217) 2025-07-24 18:04:41 +02:00
auto_deploy [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
compilation [https://nvbugs/5340941] - fix: Correct custom ops used by Qwen3 Moe … (#6285) 2025-07-25 14:49:45 +08:00
custom_ops [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) 2025-07-28 01:37:11 -04:00
debug Add debug hook to support dump tensor data and add new debug functions easily (#5182) 2025-06-24 17:45:28 +08:00
distributed [fix][nvbugs/5399355] Fix Lamport buffer clear issue for MNNVL TwoShot Allreduce and add FP16 support. (#6237) 2025-07-25 08:01:40 +08:00
models [TRTLLM-6445] feat: Enable AllReduce-associated fusion patterns in Llama3/4. (#6205) 2025-07-28 09:36:26 +08:00
modules [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) 2025-07-28 01:37:11 -04:00
peft feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pyexecutor [https://nvbugs/5402719][fix]: Add cuda graph dummy requests to the spec_resource_manager (#6258) 2025-07-26 20:32:39 -04:00
shared_tensor [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) 2025-07-10 05:12:53 +09:00
speculative fix: remove cudaStreamSynchronize when using relaxed acceptance (#5262) 2025-07-28 09:18:41 +08:00
__init__.py [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) 2025-07-27 23:29:21 -04:00
autotuner.py [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) 2025-06-17 21:01:56 +08:00
expert_statistic.py Add MTP support for Online EPLB (#5213) 2025-06-25 07:58:13 +08:00
llm.py [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py Add basic Nemo Ckpt Lora Loading in pytorch flow (#6019) 2025-07-22 19:42:45 -07:00
utils.py [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) 2025-07-28 01:37:11 -04:00