TensorRT-LLMs/tensorrt_llm

Latest commit: 0a0ac7b5dc by Pengyun Lin (2025-07-07 19:26:13 +08:00)
[nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (#5541)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
| Name | Last commit | Last updated |
|------|-------------|--------------|
| _torch | [nvbug/5302638][nvbugs/5310314] fix _handle_cancelled_requests (#5532) | 2025-07-07 16:51:24 +08:00 |
| auto_parallel | Release 0.20 to main (#4577) | 2025-05-28 16:25:33 +08:00 |
| bench | [https://nvbugspro.nvidia.com/bug/5351333][fix] Update to chunking calculation. (#5625) | 2025-07-02 17:48:02 +08:00 |
| commands | test: Add json_mode_eval for guided decoding evaluation (#5179) | 2025-06-16 10:03:55 +08:00 |
| evaluate | test: Add json_mode_eval for guided decoding evaluation (#5179) | 2025-06-16 10:03:55 +08:00 |
| executor | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00 |
| inputs | feat: Basic skeleton for Gemma3 VLM (#5108) | 2025-06-13 17:27:04 +08:00 |
| layers | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| llmapi | [nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (#5541) | 2025-07-07 19:26:13 +08:00 |
| models | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| plugin | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| runtime | [nvbug/5354825] Fix nougat test image url (#5496) | 2025-06-26 10:10:18 +08:00 |
| scaffolding | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| serve | [nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201) | 2025-07-07 15:06:49 +09:00 |
| tools | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| __init__.py | feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364) | 2025-06-25 14:10:50 +08:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _mnnvl_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _utils.py | [feat] Add llm args to tune python gc threshold (#5141) | 2025-06-16 17:45:22 +08:00 |
| builder.py | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| disaggregated_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| functional.py | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_manager.py | Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130) | 2025-06-15 18:54:04 +03:00 |
| mapping.py | fix: Fix moe_ep_groups/moe_cluster_groups in Mapping. (#4555) | 2025-05-23 10:41:49 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| top_model_mixin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| version.py | chore: bump version to 0.21.0 (#5325) | 2025-06-19 12:58:44 +08:00 |
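Several of the entries above (llmapi, sampling_params.py, version.py) make up the package's high-level LLM API. As a minimal sketch of how those modules are typically used together, assuming the publicly documented `LLM`/`SamplingParams` interface and using an arbitrary example checkpoint name, not anything taken from this listing:

```python
# Minimal sketch: drive the high-level LLM API exposed by the llmapi/ and
# sampling_params.py modules listed above.
import tensorrt_llm
from tensorrt_llm import LLM, SamplingParams

# version.py above was bumped to 0.21.0; the installed version is exposed here.
print(tensorrt_llm.__version__)

# The model name below is only an illustrative placeholder checkpoint.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

params = SamplingParams(max_tokens=32, temperature=0.8)
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```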