| Name | Last commit | Last commit date |
| --- | --- | --- |
| `_tensorrt_engine` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `_torch` | [TRTLLM-10318][feat] Fixing Nemotron sharding: support for sharding buffers (#10319) | 2026-01-17 04:02:06 -05:00 |
| `bench` | [None][chore] Print correct backend name in benchmark report (#10597) | 2026-01-12 14:46:00 -05:00 |
| `commands` | [None][chore] remove redundant retries while binding to arbitrary port (#10452) | 2026-01-06 10:39:15 -05:00 |
| `evaluate` | [None][feat] Support to export data in trtllm-eval (#10075) | 2026-01-15 23:27:08 +08:00 |
| `executor` | [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) | 2026-01-16 10:52:41 -08:00 |
| `inputs` | [TRTLLM-9522][feat] support image_embeds in OpenAI API (#9715) | 2026-01-14 10:31:03 +01:00 |
| `layers` | [None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961) | 2025-12-27 11:50:59 -08:00 |
| `llmapi` | [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) | 2026-01-16 10:52:41 -08:00 |
| `metrics` | [None][feat] Add trtllm_ prefix for exposed metrics (#8845) | 2025-11-06 15:27:18 +08:00 |
| `models` | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| `plugin` | [https://nvbugs/5788127][fix] Use uint64_t as the dtype of lamport_buffer_size to avoid overflow (#10499) | 2026-01-13 17:16:22 +08:00 |
| `quantization` | [None][fix] convert to CUDA tensor before calling _resmooth_kernel. (#10770) | 2026-01-17 16:18:34 +08:00 |
| `runtime` | [#6425][fix] address CUDA stream sync issue in ModelRunnerCPP (#6426) | 2025-12-12 13:33:22 +08:00 |
| `scaffolding` | [None][feat] Deep Research Implemented with Scaffolding (#8452) | 2025-11-06 10:33:28 +08:00 |
| `serve` | [TRTLLM-9522][feat] support image_embeds in OpenAI API (#9715) | 2026-01-14 10:31:03 +01:00 |
| `tokenizer` | [https://nvbugs/5684820][fix] fix the detokenizer issue for DeepSeek-v3.2 (#10106) | 2025-12-22 10:56:33 +08:00 |
| `tools` | [None][feat] Layer-wise benchmarks: make model init more general and support weights loading (#10562) | 2026-01-13 19:17:03 +08:00 |
| `__init__.py` | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
| `_common.py` | [None][feat] Hang detection for executor loop and worker. (#10480) | 2026-01-13 02:34:32 -05:00 |
| `_dlpack_utils.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `_ipc_utils.py` | [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) | 2026-01-14 14:05:47 +08:00 |
| `_mnnvl_utils.py` | [https://nvbugs/5791900][fix] Fix HelixCpMnnvlMemory init with PP (#10533) | 2026-01-13 15:48:42 -05:00 |
| `_ray_utils.py` | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00 |
| `_utils.py` | [None][feat] Hang detection for executor loop and worker. (#10480) | 2026-01-13 02:34:32 -05:00 |
| `builder.py` | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| `disaggregated_params.py` | [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) | 2025-12-22 06:32:49 -05:00 |
| `functional.py` | [#8921][feat] Added symmetric memory AllReduce strategy (#8919) | 2025-12-08 13:12:56 -08:00 |
| `graph_rewriting.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `logger.py` | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| `lora_helper.py` | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| `lora_manager.py` | [https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063) | 2025-10-12 12:29:52 -07:00 |
| `mapping.py` | [None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel (#9905) | 2026-01-15 07:29:15 +08:00 |
| `math_utils.py` | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| `module.py` | [None][chore] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `network.py` | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| `parameter.py` | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| `profiler.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `prompt_adapter_manager.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `python_plugin.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `ray_stub.py` | [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test (#8175) | 2025-10-14 23:46:30 +08:00 |
| `sampling_params.py` | [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) | 2026-01-16 10:52:41 -08:00 |
| `scheduling_params.py` | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| `serialization.py` | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| `top_model_mixin.py` | [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) | 2025-10-17 16:13:22 -04:00 |
| `version.py` | [None][chore] Bump version to 1.3.0rc0 (#10681) | 2026-01-15 13:55:44 +08:00 |