| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention_backend | [TRTLLM-7192][feat] optimize MLA chunked prefill && support fp8 mla chunked prefill (#7477) | 2025-09-15 21:43:49 +08:00 |
| auto_deploy | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| compilation | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| custom_ops | [TRTLLM-6898][feat] Add Cute DSL nvfp4 linear op (#7632) | 2025-09-16 14:25:26 +08:00 |
| cute_dsl_kernels | [TRTLLM-6898][feat] Add Cute DSL nvfp4 linear op (#7632) | 2025-09-16 14:25:26 +08:00 |
| debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| distributed | [TRTLLM-7361][feat] KV cache transfer for uneven pp (#7117) | 2025-09-08 13:37:46 -04:00 |
| models | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| modules | [TRTLLM-6898][feat] Add Cute DSL nvfp4 linear op (#7632) | 2025-09-16 14:25:26 +08:00 |
| peft | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033) | 2025-08-25 10:37:40 +03:00 |
| pyexecutor | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative | [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) | 2025-09-16 07:33:44 +08:00 |
| __init__.py | [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) | 2025-07-27 23:29:21 -04:00 |
| autotuner.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| flashinfer_utils.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| model_config.py | [https://nvbugs/5498165][fix] fix permission error for config file lock (#7656) | 2025-09-11 10:36:51 +08:00 |
| utils.py | [https://nvbugs/5485102][fix] Correctly set stride for piecewise outp… (#7442) | 2025-09-04 10:48:15 +08:00 |
| virtual_memory.py | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |