TensorRT-LLM/tensorrt_llm/_torch
Latest commit: f7dbc1435a by tomeras91, 2025-08-15 13:42:51 +03:00
[None] [chore] Mamba cache in separate file (#6796)
Signed-off-by: Tomer Asida <57313761+tomeras91@users.noreply.github.com>
attention_backend [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) 2025-08-15 06:56:44 +08:00
auto_deploy [None][feat] Add GPT OSS support for AutoDeploy (#6641) 2025-08-12 14:03:22 -04:00
compilation [https://nvbugs/5252313][fix] Fix torch compile + MTP (#6554) 2025-08-05 10:31:29 -04:00
custom_ops [TRTLLM-4501][feat] AutoTuner tuning config refactor and valid tactic generalization. (#6545) 2025-08-13 16:25:22 +08:00
debug Add debug hook to support dump tensor data and add new debug functions easily (#5182) 2025-06-24 17:45:28 +08:00
distributed [https://nvbugs/5445466][fix] fix deepseek r1 hang by not enabling mnnvl by default (#6860) 2025-08-14 22:36:56 +08:00
models [None][fix] Fix perfect router. (#6797) 2025-08-14 20:09:08 -07:00
modules [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) 2025-08-15 06:56:44 +08:00
peft feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pyexecutor [None] [chore] Mamba cache in separate file (#6796) 2025-08-15 13:42:51 +03:00
shared_tensor [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) 2025-07-10 05:12:53 +09:00
speculative [https://nvbugs/5455651][fix] Make ngram use XQA attention on Blackwell (#6873) 2025-08-14 18:36:19 -04:00
__init__.py [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) 2025-07-27 23:29:21 -04:00
autotuner.py [TRTLLM-4501][feat] AutoTuner tuning config refactor and valid tactic generalization. (#6545) 2025-08-13 16:25:22 +08:00
expert_statistic.py Add MTP support for Online EPLB (#5213) 2025-06-25 07:58:13 +08:00
llm.py [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py [None][fix] Correct reporting of torch_dtype for ModelConfig class. (#6800) 2025-08-14 22:46:20 -04:00
utils.py [None][perf] Improve the performance of online EPLB on Hopper by better overlapping (#6624) 2025-08-12 09:25:13 +08:00
virtual_memory.py [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) 2025-08-04 13:51:01 +08:00