| Name | Last commit | Last updated |
| --- | --- | --- |
| attention_backend | [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) | 2025-06-26 22:18:08 +08:00 |
| auto_deploy | feat: Expose bias and FP8_MXFP4 MOE CUTLASS backend features to pytorch (#5410) | 2025-06-27 12:29:34 +08:00 |
| compilation | [feat] Piecewise cuda graph support for MLA (#4467) | 2025-06-17 18:58:38 +08:00 |
| custom_ops | feat: Expose bias and FP8_MXFP4 MOE CUTLASS backend features to pytorch (#5410) | 2025-06-27 12:29:34 +08:00 |
| debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| distributed | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| models | Fix: fix build for sm120 (#5265) | 2025-06-27 20:42:47 +08:00 |
| modules | feat: Expose bias and FP8_MXFP4 MOE CUTLASS backend features to pytorch (#5410) | 2025-06-27 12:29:34 +08:00 |
| peft | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| pyexecutor | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| speculative | [TRTLLM-5000][feat] NGrams V2 (#4569) | 2025-06-27 23:00:17 +08:00 |
| __init__.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| autotuner.py | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| metadata.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| model_config.py | [TRTLLM-5825][fix] Fix torch LoRA TP (#5338) | 2025-06-19 09:12:00 +03:00 |
| utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |