TensorRT-LLM/tensorrt_llm/_torch
Latest commit: 7e6d06d5d7 by ixlmar, 2025-05-30 10:40:45 +02:00
feat: estimate GPU mem. usage w/ minimal KV cache (#4574)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
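The headline commit (#4574) touches how the PyTorch-flow executor under `pyexecutor` estimates peak GPU memory by profiling with a minimal KV cache before sizing the real one. For orientation only, below is a minimal sketch of the user-facing knob that steers the final KV-cache sizing, assuming the public `tensorrt_llm` LLM API; the model checkpoint name is a placeholder, and this is not the commit's internal implementation.

```python
# Minimal sketch (assumption: public tensorrt_llm LLM API; model name is a placeholder).
# KvCacheConfig controls how much of the remaining GPU memory the KV cache may use,
# which is the budget that the executor's memory estimation feeds into.
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Devote at most 80% of the GPU memory left after weights/activations to the KV cache.
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.8)

llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder checkpoint
    kv_cache_config=kv_cache_config,
)

for output in llm.generate(["Hello, my name is"]):
    print(output.outputs[0].text)
```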
| Name | Last commit | Last updated |
| --- | --- | --- |
| `attention_backend` | [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) | 2025-05-23 19:47:50 +08:00 |
| `auto_deploy` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `compilation` | [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) | 2025-05-19 22:12:25 +08:00 |
| `custom_ops` | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| `distributed` | Release 0.20 to main (#4577) | 2025-05-28 16:25:33 +08:00 |
| `models` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| `modules` | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| `peft` | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| `pyexecutor` | feat: estimate GPU mem. usage w/ minimal KV cache (#4574) | 2025-05-30 10:40:45 +02:00 |
| `speculative` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| `__init__.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `autotuner.py` | Downgrade the logger level for fallback tactic warning. (#4440) | 2025-05-19 18:26:54 +08:00 |
| `llm.py` | test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) | 2025-03-26 18:14:35 +08:00 |
| `metadata.py` | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| `model_config.py` | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| `utils.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |