| Name | Latest commit | Date |
| --- | --- | --- |
| attention_backend | [https://nvbugs/5467548][fix] DeepSeek illegal memory access. (#7298) | 2025-08-29 12:19:03 +08:00 |
| auto_deploy | [None][opt] ADP schedule balance optimization (#6061) | 2025-08-06 09:38:02 +08:00 |
| compilation | [https://nvbugs/5383702][fix] test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_fp8_4gpus (#6889) | 2025-08-21 08:56:42 +08:00 |
| custom_ops | [https://nvbugs/5392414] [fix] For release 1.0 cherry pick. Add customized default routing method (#7068) | 2025-08-21 20:06:50 +08:00 |
| debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| distributed | [fix][nvbugs/5399355] Fix Lamport buffer clear issue for MNNVL TwoShot Allreduce and add FP16 support. (#6237) | 2025-07-25 08:01:40 +08:00 |
| models | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7225) | 2025-08-26 09:40:17 -07:00 |
| modules | [TRTLLM-7008][fix] cherrypick fix to 1.0 Add automatic shared memory delete if already exist (#7433) | 2025-09-02 11:23:53 +08:00 |
| peft | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7203) | 2025-08-28 16:06:32 +08:00 |
| pyexecutor | [https://nvbugs/5474169][fix] seq_len mismatch between kv cache manager and graph attn metadata (#7606) | 2025-09-09 08:32:31 +08:00 |
| shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative | [https://nvbugs/5451426][fix] Avoid torch compile on full eagle3 worker (#7245) | 2025-08-27 09:59:06 +08:00 |
| __init__.py | [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) | 2025-07-27 23:29:21 -04:00 |
| autotuner.py | [None][fix] fix log_once usage (#7210) | 2025-08-26 19:13:03 +08:00 |
| expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| metadata.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| model_config.py | [https://nvbugs/5445466][fix] Eliminate race when loading HF dynamic modules (#7268) (#7379) | 2025-08-30 17:44:24 +08:00 |
| utils.py | [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) | 2025-07-28 01:37:11 -04:00 |
| virtual_memory.py | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |