TensorRT-LLM / tensorrt_llm

Latest commit: chore: bump version to 0.21.0rc3 (#5309)
Author: Zhanrui Sun (516bd4dc05)
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Date: 2025-06-18 15:59:53 +08:00
| Name | Last commit | Last updated |
|---|---|---|
| _torch | refactor: Introduce ResourceManagerType enum for resource management (#5246) | 2025-06-18 09:55:59 +02:00 |
| auto_parallel | Release 0.20 to main (#4577) | 2025-05-28 16:25:33 +08:00 |
| bench | chore: Mass integration of release/0.20 (#5082) | 2025-06-17 14:32:02 +03:00 |
| commands | test: Add json_mode_eval for guided decoding evaluation (#5179) | 2025-06-16 10:03:55 +08:00 |
| evaluate | test: Add json_mode_eval for guided decoding evaluation (#5179) | 2025-06-16 10:03:55 +08:00 |
| executor | Re-implement LlmResponse in Python to reduce host overhead of pybind (#5224) | 2025-06-17 21:28:09 +08:00 |
| inputs | feat: Basic skeleton for Gemma3 VLM (#5108) | 2025-06-13 17:27:04 +08:00 |
| layers | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| llmapi | chore: partition LLM class into TorchLLM and TrtLLM (#4900) | 2025-06-18 14:01:25 +08:00 |
| models | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| plugin | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| runtime | chore: Mass integration of release/0.20 (#5082) | 2025-06-17 14:32:02 +03:00 |
| scaffolding | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| serve | feat: Add no_kv_cache_reuse option and streaming support for trtllm serve bench (#4971) | 2025-06-18 13:37:31 +08:00 |
| tools | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| __init__.py | chore: Partition LlmArgs into TorchLlmArgs and TrtLlmArgs (#3823) | 2025-05-22 09:40:56 +08:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _mnnvl_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _utils.py | [feat] Add llm args to tune python gc threshold (#5141) | 2025-06-16 17:45:22 +08:00 |
| builder.py | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| disaggregated_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| functional.py | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_manager.py | Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130) | 2025-06-15 18:54:04 +03:00 |
| mapping.py | fix: Fix moe_ep_groups/moe_cluster_groups in Mapping. (#4555) | 2025-05-23 10:41:49 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| top_model_mixin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| version.py | chore: bump version to 0.21.0rc3 (#5309) | 2025-06-18 15:59:53 +08:00 |
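The "Last commit" and "Last updated" columns above are metadata a Git hosting frontend derives from `git log` for each entry. As a rough illustration only (not part of the TensorRT-LLM codebase), here is a minimal Python sketch that reproduces the same columns from a local clone, assuming `git` is on PATH and the script is run from the repository root:

```python
# Sketch: print each entry under tensorrt_llm/ with its most recent commit
# subject and committer date, mirroring the columns in the table above.
# Assumes a local clone of the repository and that `git` is on PATH.
import subprocess
from pathlib import Path


def last_commit(path: str) -> str:
    # %h = short hash, %s = commit subject, %ci = committer date (ISO-like)
    result = subprocess.run(
        ["git", "log", "-1", "--pretty=format:%h  %s  %ci", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


for entry in sorted(Path("tensorrt_llm").iterdir()):
    print(f"{entry.name:30} {last_commit(str(entry))}")
```

`git log -1 -- <path>` limits the history to commits touching that path, which is exactly how a per-entry "last commit" column is populated.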