TensorRT-LLM/tensorrt_llm/_torch
Latest commit 9c4b8f66b4: feat: Integration of Fused QKNorm+RoPE. (#4611)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-05-28 11:20:45 +08:00
attention_backend [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) 2025-05-23 19:47:50 +08:00
auto_deploy [AutoDeploy] Increased Model Coverage Mass Migration Week 1 (#4468) 2025-05-27 16:43:15 +08:00
compilation [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) 2025-05-19 22:12:25 +08:00
custom_ops [feat] Integrate Hopper chunked attention kernels (#4330) 2025-05-22 17:10:57 -04:00
distributed feat: Skip sampler for intermediate pp stages. (#4514) 2025-05-26 10:08:51 +08:00
models feat: Integration of Fused QKNorm+RoPE. (#4611) 2025-05-28 11:20:45 +08:00
modules feat: Integration of Fused QKNorm+RoPE. (#4611) 2025-05-28 11:20:45 +08:00
peft feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pyexecutor feat: Integration of Fused QKNorm+RoPE. (#4611) 2025-05-28 11:20:45 +08:00
speculative feat: Skip sampler for intermediate pp stages. (#4514) 2025-05-26 10:08:51 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
autotuner.py Downgrade the logger level for fallback tactic warning. (#4440) 2025-05-19 18:26:54 +08:00
llm.py test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) 2025-03-26 18:14:35 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py feat: large-scale EP(part 4: Static EP load balancer integration) (#4615) 2025-05-26 18:25:11 +08:00
utils.py feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) 2025-05-16 04:16:53 +08:00
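
For context, llm.py in this directory backs the LLM entry point of the PyTorch workflow. Below is a minimal usage sketch, assuming the public tensorrt_llm LLM API of this release; the model id and sampling values are illustrative and not taken from this tree:

    # Minimal sketch: drive the PyTorch backend through the public LLM API.
    # Assumes tensorrt_llm is installed; model id and parameters are illustrative.
    from tensorrt_llm import LLM, SamplingParams

    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # HF id or local path
    params = SamplingParams(max_tokens=32, temperature=0.8)
    for output in llm.generate(["Hello, my name is"], params):
        print(output.outputs[0].text)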