TensorRT-LLM/tensorrt_llm/_torch
Latest commit: Chore: only pad one dummy request for attention dp scenario (#4664) by QI JUN, 2025-05-27 14:56:22 +08:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| attention_backend | [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) | 2025-05-23 19:47:50 +08:00 |
| auto_deploy | [AutoDeploy] HF factory improvements (#4371) | 2025-05-19 20:13:43 -07:00 |
| compilation | [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) | 2025-05-19 22:12:25 +08:00 |
| custom_ops | [feat] Integrate Hopper chunked attention kernels (#4330) | 2025-05-22 17:10:57 -04:00 |
| distributed | feat: Skip sampler for intermediate pp stages. (#4514) | 2025-05-26 10:08:51 +08:00 |
| models | feat: large-scale EP(part 4: Static EP load balancer integration) (#4615) | 2025-05-26 18:25:11 +08:00 |
| modules | feat: large-scale EP(part 4: Static EP load balancer integration) (#4615) | 2025-05-26 18:25:11 +08:00 |
| peft | | |
| pyexecutor | Chore: only pad one dummy request for attention dp scenario (#4664) | 2025-05-27 14:56:22 +08:00 |
| speculative | feat: Skip sampler for intermediate pp stages. (#4514) | 2025-05-26 10:08:51 +08:00 |
| __init__.py | | |
| autotuner.py | Downgrade the logger level for fallback tactic warning. (#4440) | 2025-05-19 18:26:54 +08:00 |
| llm.py | | |
| metadata.py | | |
| model_config.py | feat: large-scale EP(part 4: Static EP load balancer integration) (#4615) | 2025-05-26 18:25:11 +08:00 |
| utils.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |