TensorRT-LLM/tensorrt_llm
Mike Iovine 25aa3881d7
[nvbug/5319281][fix] Stop drafting when we hit the draft model's max seq len (#4879)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-06-13 11:06:36 -04:00
_torch [nvbug/5319281][fix] Stop drafting when we hit the draft model's max seq len (#4879) 2025-06-13 11:06:36 -04:00
auto_parallel Release 0.20 to main (#4577) 2025-05-28 16:25:33 +08:00
bench feat: add HyperCLOVAX-SEED-Vision support in refactored way (#4799) 2025-06-09 11:04:04 +08:00
commands fix: remove duplicated trust_remote_code knob from trtllm-serve (#5143) 2025-06-12 19:48:24 +08:00
evaluate Add llama4 disagg accuracy tests (#4336) 2025-05-19 21:55:08 +08:00
executor [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074) 2025-06-13 11:37:50 +08:00
inputs feat: Basic skeleton for Gemma3 VLM (#5108) 2025-06-13 17:27:04 +08:00
layers refactoring: port customized kernels with public cutlass version (#5027) 2025-06-13 16:19:31 +08:00
llmapi refactor [BREAKING CHANGE]: remove the redundant use_kv_cache field from PytorchConfig (#5031) 2025-06-13 16:34:24 +08:00
models Solve underallocation in VSWA+/VGQA (#4667) 2025-06-12 12:12:46 +08:00
plugin Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
quantization Mxfp8xmxfp4 quant mode (#4978) 2025-06-10 22:01:37 +08:00
runtime [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145) 2025-06-10 01:59:56 +08:00
scaffolding chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) 2025-05-28 18:43:04 +08:00
serve chore: gracefully exit disagg process in tests; better startup and logging (#5109) 2025-06-13 14:03:55 +08:00
tools chore: Mass integration of release/0.20 (#4898) 2025-06-08 23:26:26 +08:00
__init__.py chore: Partition LlmArgs into TorchLlmArgs and TrtLlmArgs (#3823) 2025-05-22 09:40:56 +08:00
_common.py Update (#2978) 2025-03-23 16:39:35 +08:00
_dlpack_utils.py feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
_ipc_utils.py fix: Proper error bubbling for PyExecutor (#3321) 2025-04-15 14:49:46 +08:00
_mnnvl_utils.py fix: Remove real size allocation (#4396) 2025-05-18 19:13:22 +08:00
_utils.py feat: Skip sampler for intermediate pp stages. (#4514) 2025-05-26 10:08:51 +08:00
builder.py Revert "fix: build_config in TorchLlmArgs and avoid invalid args" (#4949) 2025-06-05 17:43:30 +08:00
disaggregated_params.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
functional.py fix: Updates to yarn implementation (#5105) 2025-06-12 20:45:34 +08:00
graph_rewriting.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
logger.py perf: Fuse gemm setup function for SM90/SM100 MOE plugin path (#4146) 2025-05-21 10:00:36 +08:00
lora_manager.py add changes for fp8, nemotron-nas, API (#4180) 2025-05-18 23:27:25 +08:00
mapping.py fix: Fix moe_ep_groups/moe_cluster_groups in Mapping. (#4555) 2025-05-23 10:41:49 +08:00
module.py Update (#2978) 2025-03-23 16:39:35 +08:00
network.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
parameter.py fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) 2025-05-15 11:16:45 +08:00
profiler.py test [TRTLLM-4477,TRTLLM-4481]: Accuracy test improvement (Part 3.5): Support GSM8K and GPQA (#3483) 2025-04-22 07:38:16 +08:00
prompt_adapter_manager.py Update TensorRT-LLM (#2333) 2024-10-15 15:28:40 +08:00
python_plugin.py refactor: use x is None instead of x == None. (#4244) 2025-05-15 20:00:04 +08:00
sampling_params.py [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074) 2025-06-13 11:37:50 +08:00
top_model_mixin.py Update TensorRT-LLM (#2053) 2024-07-30 21:25:01 +08:00
version.py chore: bump version to 0.21.0rc2 (#5112) 2025-06-11 15:08:14 +08:00
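A minimal sketch of how this package is typically driven through its high-level LLM API (the `llmapi` directory and `sampling_params.py` listed above). The model identifier, prompts, and sampling values below are illustrative assumptions, not taken from the listing.

```python
# Minimal sketch, assuming tensorrt_llm is installed and the example
# Hugging Face model id is reachable. Uses the high-level LLM API and
# SamplingParams from the package directory listed above.
from tensorrt_llm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Engine build/load happens inside LLM(); the model id is only an example.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# generate() returns one result per prompt; print the first completion of each.
for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```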