| Name | Last commit | Last commit date |
| --- | --- | --- |
| auto_deploy | [Architecture] Refactor FusedMoE (#4790) | 2025-06-03 14:02:19 +08:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| modeling | [Architecture] Refactor FusedMoE (#4790) | 2025-06-03 14:02:19 +08:00 |
| modules | [Architecture] Refactor FusedMoE (#4790) | 2025-06-03 14:02:19 +08:00 |
| multi_gpu | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| multi_gpu_modeling | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| speculative | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| thop | [TRTLLM-4783][feat] Mamba2 kernel updates for Nemotron-H (#4494) | 2025-06-01 13:56:44 +03:00 |
| helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_attention_mla.py | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00 |
| test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp8_per_tensor_scale_tllmg_gemm.py | Cherry-pick trtllm-gen from feat/llama4 to main (#4086) | 2025-05-08 14:13:01 -07:00 |
| test_group_rmn_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00 |
| test_overlap_scheduler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| test_pytorch_model_engine.py | chore: move all distributed related codes into _torch.distributed directory (#3511) | 2025-04-15 08:39:17 +08:00 |
| test_resource_manager.py | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| test_return_logits.py | [TRTLLM-4987][feat] Partial support of context logits in TRTLLMSampler (#4538) | 2025-06-01 03:32:43 +08:00 |
| test_trtllm_sampler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |