TensorRT-LLM/tensorrt_llm/_torch
Latest commit: 64e3bfa054 by Mike Iovine, 2025-09-03 15:04:14 -04:00
[None][fix] Fix KV cache recompute in draft_target spec decode (#7348)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Name | Last commit | Date
attention_backend | [https://nvbugs/5374016][fix] improve error message (#6893) | 2025-09-01 11:02:31 +08:00
auto_deploy | [None][chore] Use llm args in create_py_executor (#7239) | 2025-09-01 16:27:55 -07:00
compilation | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750) | 2025-08-26 18:31:33 -04:00
custom_ops | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) | 2025-08-27 13:02:10 +08:00
debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00
distributed | [https://nvbugs/5445466][fix] Bypass MLP TP split for MNNVL in DeepSeek V3 to avoid hanging. (#6886) | 2025-08-28 15:17:48 -07:00
models | [TRTLLM-7353][feat] Implement capturable drafting loops for speculation (#7100) | 2025-09-01 14:37:44 -04:00
modules | [None][doc] fix example in docstring (#7410) | 2025-09-02 11:59:49 +03:00
peft | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033) | 2025-08-25 10:37:40 +03:00
pyexecutor | [https://nvbugs/5472947][fix] wait on isend handles before reusing buffers (#7462) | 2025-09-03 13:20:02 +05:30
shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00
speculative | [None][fix] Fix KV cache recompute in draft_target spec decode (#7348) | 2025-09-03 15:04:14 -04:00
__init__.py | [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) | 2025-07-27 23:29:21 -04:00
autotuner.py | [None][perf] Make finalize fusion part of the tactic selection logic (#6915) | 2025-08-21 14:08:03 -07:00
expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00
flashinfer_utils.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00
llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00
metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
model_config.py | [https://nvbugs/5445466][fix] Eliminate race when loading HF dynamic modules (#7268) | 2025-08-29 12:36:30 +08:00
utils.py | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750) | 2025-08-26 18:31:33 -04:00
virtual_memory.py | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00