| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `attention_backend` | fix: Skip rope scaling for local layers in Gemma3 VLM (#5857) | 2025-07-09 10:10:33 +08:00 |
| `auto_deploy` | [TRTLLM-5530][BREAKING CHANGE] refactor: LLM arglist rename mixed_sampler to enable_mixed_sampler (#5751) | 2025-07-07 17:05:14 +08:00 |
| `compilation` | [feat] Piecewise cuda graph support for MLA (#4467) | 2025-06-17 18:58:38 +08:00 |
| `custom_ops` | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| `debug` | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| `distributed` | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| `models` | [TRTLLM-6262] Fix Llama4 Scout FP4 crash issue (#5834) | 2025-07-09 14:23:21 +08:00 |
| `modules` | chore: some refactor on WideEP (#5727) | 2025-07-09 14:26:57 +08:00 |
| `peft` | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| `pyexecutor` | [fix] Catch inference failures in trtllm-bench (#5841) | 2025-07-09 03:53:03 +03:00 |
| `speculative` | [TRTLLM-5847][feat] Support n-gram speculative decoding with disagg (#5732) | 2025-07-08 09:39:58 -04:00 |
| `__init__.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `autotuner.py` | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| `expert_statistic.py` | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| `llm.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `metadata.py` | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| `model_config.py` | Feat/pytorch vswa kvcachemanager (#5151) | 2025-07-02 15:58:00 +08:00 |
| `utils.py` | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |