| Name | Last commit message | Last commit date |
|---|---|---|
| attention_backend | [https://nvbugs/5448426][fix] Fix illegal memory access in cuda graph (#7127) | 2025-08-25 10:04:34 +08:00 |
| auto_deploy | [None][opt] ADP schedule balance optimization (#6061) | 2025-08-06 09:38:02 +08:00 |
| compilation | [https://nvbugs/5383702][fix] test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_fp8_4gpus (#6889) | 2025-08-21 08:56:42 +08:00 |
| custom_ops | [https://nvbugs/5392414][fix] For release 1.0 cherry pick. Add customized default routing method (#7068) | 2025-08-21 20:06:50 +08:00 |
| debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| distributed | [fix][nvbugs/5399355] Fix Lamport buffer clear issue for MNNVL TwoShot Allreduce and add FP16 support. (#6237) | 2025-07-25 08:01:40 +08:00 |
| models | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7225) | 2025-08-26 09:40:17 -07:00 |
| modules | [https://nvbugs/5461712][fix] Disable deep_gemm for Qwen3 due to accuracy issues (#7170) | 2025-08-23 05:26:12 -04:00 |
| peft | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| pyexecutor | [TRTLLM-6825][fix] Update lora for phi4-mm (#7149) | 2025-08-23 20:57:00 +08:00 |
| shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative | [https://nvbugs/5252313][fix] Fix torch compile + MTP (#6554) | 2025-08-05 10:31:29 -04:00 |
| __init__.py | [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) | 2025-07-27 23:29:21 -04:00 |
| autotuner.py | [None][fix] fix log_once usage (#7210) | 2025-08-26 19:13:03 +08:00 |
| expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| metadata.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| model_config.py | Bugfix/fix nemotron nas lora support (#6380) | 2025-07-31 13:39:35 -04:00 |
| utils.py | [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) | 2025-07-28 01:37:11 -04:00 |
| virtual_memory.py | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |