TensorRT-LLM/tensorrt_llm
Latest commit: 225d3a9001 by Anthony Chang, 2026-01-05 17:16:12 +01:00
[None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
| Name | Last commit | Last commit date |
|---|---|---|
| _tensorrt_engine | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch | [None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998) | 2026-01-05 17:16:12 +01:00 |
| bench | [TRTLLM-9089][chore] Port prepare_dataset into trtllm-bench (#9250) | 2025-12-08 10:37:40 -08:00 |
| commands | [TRTLLM-8242][feat] Add stability tags for serve subcommand (#10012) | 2026-01-05 14:16:15 +08:00 |
| evaluate | [https://nvbugs/5717993][fix] Add execution_stream across PyExecutor, KVCacheManager, PeftCacheManager to ensure proper CUDA stream synchronization between KV cache transfer operations and model forward kernels. (#10060) | 2025-12-31 09:22:54 -08:00 |
| executor | [TRTLLM-9737][chore] Add rl perf reproduce script and enhance the robustness of Ray tests (#9939) | 2025-12-24 15:27:01 +08:00 |
| inputs | [None][feat] Support VLM part for Mistral Large 3 (#10188) | 2025-12-25 11:20:58 -05:00 |
| layers | [None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961) | 2025-12-27 11:50:59 -08:00 |
| llmapi | [https://nvbugs/5779534][fix] fix buffer reuse for CUDA graph attention metadata (#10393) | 2026-01-05 09:43:44 +08:00 |
| metrics | [None][feat] Add trtllm_ prefix for exposed metrics (#8845) | 2025-11-06 15:27:18 +08:00 |
| models | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| plugin | [None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (#9538) | 2025-11-28 16:45:23 +08:00 |
| quantization | [None][feat] sm100 weight-only kernel (#10190) | 2026-01-05 09:44:36 +08:00 |
| runtime | [#6425][fix] address CUDA stream sync issue in ModelRunnerCPP (#6426) | 2025-12-12 13:33:22 +08:00 |
| scaffolding | [None][feat] Deep Research Implemented with Scaffolding (#8452) | 2025-11-06 10:33:28 +08:00 |
| serve | [None][chore] Unify DS tool parser names (#10239) | 2025-12-31 14:40:07 +08:00 |
| tokenizer | [https://nvbugs/5684820][fix] fix the detokenizer issue for DeepSeek-v3.2 (#10106) | 2025-12-22 10:56:33 +08:00 |
| tools | [None][feat] Layer-wise benchmarks: support TEP balance, polish slurm scripts (#10237) | 2026-01-05 11:23:04 +08:00 |
| __init__.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
| _common.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | [None][chore] Modify python ipc_util to align with C++ path (#9894) | 2025-12-12 15:55:22 +08:00 |
| _mnnvl_utils.py | [TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986) | 2025-12-23 18:14:30 -08:00 |
| _ray_utils.py | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00 |
| _utils.py | [TRTLLM-7735][feat] Attention NVFP4 out support for torch compile (#9740) | 2025-12-27 00:07:20 +08:00 |
| builder.py | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| disaggregated_params.py | [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) | 2025-12-22 06:32:49 -05:00 |
| functional.py | [#8921][feat] Added symetric memory AllReduce strategy (#8919) | 2025-12-08 13:12:56 -08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| lora_helper.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| lora_manager.py | [https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063) | 2025-10-12 12:29:52 -07:00 |
| mapping.py | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| network.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| parameter.py | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| ray_stub.py | [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test (#8175) | 2025-10-14 23:46:30 +08:00 |
| sampling_params.py | [None] [fix] Revert "[None] [feat] add eos_token_id in generation_config to sampling params" (#10002) | 2025-12-15 08:52:52 -08:00 |
| scheduling_params.py | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| serialization.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| top_model_mixin.py | [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) | 2025-10-17 16:13:22 -04:00 |
| version.py | [None][chore] Bump version to 1.2.0rc7 (#10216) | 2025-12-23 15:07:47 +08:00 |
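The listing above is a package manifest rather than a tutorial, but the `llmapi` directory, `sampling_params.py`, and the top-level `__init__.py` together expose the package's main user-facing entry point. A minimal sketch of how that entry point is typically used, assuming the quickstart-style `LLM`/`SamplingParams` interface documented for TensorRT LLM; the model identifier below is a placeholder:

```python
from tensorrt_llm import LLM, SamplingParams

# Load a Hugging Face model ID or local checkpoint path (placeholder model).
# Per the "_tensorrt_engine ... make pytorch LLM the default" entry above,
# the PyTorch backend is the default LLM implementation.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Sampling controls correspond to sampling_params.py.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```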