TensorRT-LLM/tensorrt_llm
Latest commit: 746394e990 by Kaiyu Xie (2025-06-14 00:36:15 +08:00)
[TRTLLM-5516] perf: replicate dummy request for cuda graph padding (cherry-pick #4729) (#5190)
Signed-off-by: QI JUN <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| _torch | [TRTLLM-5516] perf: replicate dummy request for cuda graph padding (cherry-pick #4729) (#5190) | 2025-06-14 00:36:15 +08:00 |
| auto_parallel | chore: Deprecate autopp. (#4471) | 2025-05-21 13:50:11 +08:00 |
| bench | [https://nvbugspro.nvidia.com/bug/5323820] Fix chunking equation for disabled case. (#4964) | 2025-06-06 15:51:10 +08:00 |
| commands | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| evaluate | Add llama4 disagg accuracy tests (#4336) | 2025-05-19 21:55:08 +08:00 |
| executor | fix: [nvbug 5321627] handle cases when TRT backend returns more logits than output tokens (#4921) | 2025-06-06 07:12:42 +08:00 |
| inputs | [TRTLLM-5054][fix] Removing repeated loading of input processor (#4161) | 2025-05-16 08:04:58 +08:00 |
| layers | [https://nvbugs/5289907][fix] Restore per-channel pre-quant (#4545) | 2025-05-23 19:46:53 +08:00 |
| llmapi | fix: llmapi-launch add trtllm-bench test with engine building (#4… (#4550) | 2025-06-01 08:38:01 +08:00 |
| models | fix: https://nvbugs/5214239 (#4718) | 2025-05-29 09:36:31 +08:00 |
| plugin | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| quantization | chore: bump version to 0.19.0 (#3598) (#3841) | 2025-04-29 16:57:22 +08:00 |
| runtime | [https://nvbugs/5238105] fix: ModelRunnerCpp num_return_sequences (#3951) | 2025-06-06 12:31:11 +02:00 |
| scaffolding | [TRTLLM-4638] feat(scaffolding): update Reward Controller to PRM specific controller with step split (#4337) | 2025-05-19 17:53:41 +08:00 |
| serve | [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265) | 2025-05-19 00:34:40 +08:00 |
| tools | fix: Mistral Small vision encoder with BS>1 (#4713) | 2025-05-28 12:49:28 +08:00 |
| __init__.py | fix: revert https://github.com/NVIDIA/TensorRT-LLM/pull/3858 (#3928) | 2025-04-29 11:26:13 +08:00 |
| _common.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| _dlpack_utils.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| _ipc_utils.py | fix: Proper error bubbling for PyExecutor (#3321) | 2025-04-15 14:49:46 +08:00 |
| _mnnvl_utils.py | fix: Remove real size allocation (#4396) | 2025-05-18 19:13:22 +08:00 |
| _utils.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| builder.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| disaggregated_params.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| functional.py | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| graph_rewriting.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| logger.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| lora_manager.py | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00 |
| mapping.py | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| module.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | test [TRTLLM-4477,TRTLLM-4481]: Accuracy test improvement (Part 3.5): Support GSM8K and GPQA (#3483) | 2025-04-22 07:38:16 +08:00 |
| prompt_adapter_manager.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| python_plugin.py | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| sampling_params.py | feat: Support the Structural Tag in guided decoding (#4066) | 2025-05-12 17:24:50 +08:00 |
| top_model_mixin.py | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00 |
| version.py | chore: bump version to 0.20.0 (#4469) | 2025-05-20 15:27:29 +08:00 |