TensorRT-LLM/tensorrt_llm/_torch
Latest commit: 4ee82fc0fd by ixlmar — chore: reduce code duplication (#4297)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
2025-05-15 09:25:37 +01:00
| Name | Last commit | Date |
| --- | --- | --- |
| attention_backend/ | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| auto_deploy/ | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| compilation/ | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| custom_ops/ | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| distributed/ | Revert "feat: Low Precision Allreduce for PCIe based GPU" (#4340) | 2025-05-15 09:52:39 +08:00 |
| models/ | Add allreduce and rmsnorm fusion for qwen3 (#4304) | 2025-05-15 16:22:11 +08:00 |
| modules/ | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00 |
| peft/ | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| pyexecutor/ | chore: reduce code duplication (#4297) | 2025-05-15 09:25:37 +01:00 |
| speculative/ | [fix] Fix relaxed acceptance to support enabling it in context phase (#4126) | 2025-05-09 14:11:14 +08:00 |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| autotuner.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| llm.py | test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) | 2025-03-26 18:14:35 +08:00 |
| metadata.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| model_config.py | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| pipeline_interface.py | chore: bump version to 0.19.0 (#3598) (#3841) | 2025-04-29 16:57:22 +08:00 |
| utils.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
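For orientation, here is a minimal sketch of driving the PyTorch flow through the `LLM` entry point defined in `llm.py` above. The model checkpoint and sampling settings are illustrative assumptions, not taken from this listing.

```python
# A minimal sketch, assuming the high-level LLM API that llm.py exposes
# for the PyTorch flow. The model name and sampling settings below are
# illustrative assumptions, not taken from this listing.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch import LLM

# Load a model with the PyTorch backend (assumed checkpoint for illustration).
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Generate a completion for a single prompt.
outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```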