TensorRT-LLM/tensorrt_llm/_torch
Latest commit: d93a5e04b5, "Chore: remove unused variables (#5314)" by QI JUN, 2025-06-24 22:27:32 +08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `attention_backend` | [feat] Piecewise cuda graph support for MLA (#4467) | 2025-06-17 18:58:38 +08:00 |
| `auto_deploy` | Chore: remove unused variables (#5314) | 2025-06-24 22:27:32 +08:00 |
| `compilation` | [feat] Piecewise cuda graph support for MLA (#4467) | 2025-06-17 18:58:38 +08:00 |
| `custom_ops` | feat: Misc Opt for large scale EP (#5374) | 2025-06-20 13:11:31 +08:00 |
| `debug` | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| `distributed` | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| `models` | chore: delete mamba hybrid, since it is now called NemotronH (#5409) | 2025-06-24 16:27:31 +08:00 |
| `modules` | feat: Misc Opt for large scale EP (#5374) | 2025-06-20 13:11:31 +08:00 |
| `peft` | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| `pyexecutor` | Chore: remove unused variables (#5314) | 2025-06-24 22:27:32 +08:00 |
| `speculative` | feature: unify new_tokens format sample state to trtllm sampler new_tokens format (#4401) | 2025-06-23 10:38:37 -07:00 |
| `__init__.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `autotuner.py` | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| `expert_statistic.py` | feat: large-scale EP(part 5: Static EP load balancer with offline statistics) (#4695) | 2025-06-02 01:25:02 +08:00 |
| `llm.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `metadata.py` | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| `model_config.py` | [TRTLLM-5825][fix] Fix torch LoRA TP (#5338) | 2025-06-19 09:12:00 +03:00 |
| `utils.py` | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00 |
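The `llm.py` entry above backs the PyTorch path of the high-level `tensorrt_llm.LLM` API, which the `#5312` commit makes the default workflow. A minimal usage sketch, assuming the public `LLM`/`SamplingParams` interface; the model checkpoint name is illustrative:

```python
# Minimal sketch of the high-level LLM API (PyTorch backend is the default per #5312).
# The model identifier below is illustrative; substitute any supported checkpoint path.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompts = ["Hello, my name is"]
outputs = llm.generate(prompts, SamplingParams(max_tokens=32))

for output in outputs:
    # Each result carries the original prompt and the generated completion text.
    print(output.prompt, "->", output.outputs[0].text)
```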