| Name | Last commit | Last commit date |
| --- | --- | --- |
| `_torch` | feat: large-scale EP(part 2: MoE Load Balancer - core utilities) (#4384) | 2025-05-20 17:53:48 +08:00 |
| `auto_parallel` | fix: Fix NVLink version decoding. (#3996) | 2025-05-06 13:56:50 +08:00 |
| `bench` | test(perf): Add some Llama-3_3-Nemotron-Super-49B-v1 integration-perf-tests (TRT flow, trtllm-bench) (#4128) | 2025-05-19 12:00:48 -07:00 |
| `commands` | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| `evaluate` | Add llama4 disagg accuracy tests (#4336) | 2025-05-19 21:55:08 +08:00 |
| `executor` | [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265) | 2025-05-19 00:34:40 +08:00 |
| `inputs` | [TRTLLM-5054][fix] Removing repeated loading of input processor (#4161) | 2025-05-16 08:04:58 +08:00 |
| `layers` | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| `llmapi` | chore: cleanup perf_evaluator code (#3833) | 2025-05-19 13:21:36 +08:00 |
| `models` | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| `plugin` | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| `quantization` | chore: bump version to 0.19.0 (#3598) (#3841) | 2025-04-29 16:57:22 +08:00 |
| `runtime` | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| `scaffolding` | [TRTLLM-4638] feat(scaffolding): update Reward Controller to PRM specific controller with step split (#4337) | 2025-05-19 17:53:41 +08:00 |
| `serve` | [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265) | 2025-05-19 00:34:40 +08:00 |
| `tools` | feat: Support Mistral Small 3.1 24B VLM in TRT workflow (#4183) | 2025-05-14 03:47:22 +08:00 |
| `__init__.py` | fix: revert https://github.com/NVIDIA/TensorRT-LLM/pull/3858 (#3928) | 2025-04-29 11:26:13 +08:00 |
| `_common.py` | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| `_dlpack_utils.py` | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| `_ipc_utils.py` | fix: Proper error bubbling for PyExecutor (#3321) | 2025-04-15 14:49:46 +08:00 |
| `_mnnvl_utils.py` | fix: Remove real size allocation (#4396) | 2025-05-18 19:13:22 +08:00 |
| `_utils.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `builder.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `disaggregated_params.py` | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| `functional.py` | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| `graph_rewriting.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `logger.py` | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| `lora_manager.py` | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00 |
| `mapping.py` | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| `module.py` | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| `network.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `parameter.py` | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| `profiler.py` | test [TRTLLM-4477,TRTLLM-4481]: Accuracy test improvement (Part 3.5): Support GSM8K and GPQA (#3483) | 2025-04-22 07:38:16 +08:00 |
| `prompt_adapter_manager.py` | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| `python_plugin.py` | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| `sampling_params.py` | feat: Support the Structural Tag in guided decoding (#4066) | 2025-05-12 17:24:50 +08:00 |
| `top_model_mixin.py` | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00 |
| `version.py` | chore: bump version to 0.21.0rc0 (#4465) | 2025-05-20 12:19:50 +08:00 |