TensorRT-LLM/tensorrt_llm
Latest commit: bf691b3d28 by Yuxian Qiu, 2025-05-29 06:24:24 +08:00
feat: support packed weights in vanilla moe (#4719)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
| Name | Latest commit | Date |
| --- | --- | --- |
| `_torch/` | feat: support packed weights in vanilla moe (#4719) | 2025-05-29 06:24:24 +08:00 |
| `auto_parallel/` | Release 0.20 to main (#4577) | 2025-05-28 16:25:33 +08:00 |
| `bench/` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `commands/` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `evaluate/` | Add llama4 disagg accuracy tests (#4336) | 2025-05-19 21:55:08 +08:00 |
| `executor/` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `inputs/` | fix: Move cv2 import to load_video function (#4541) | 2025-05-22 17:56:07 +02:00 |
| `layers/` | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| `llmapi/` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `models/` | [NVBUG 5301980] Fix fp4 gemm padding. (#4662) | 2025-05-27 11:30:53 +08:00 |
| `plugin/` | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| `quantization/` | [NVBUG 5301980] Fix fp4 gemm padding. (#4662) | 2025-05-27 11:30:53 +08:00 |
| `runtime/` | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| `scaffolding/` | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| `serve/` | [TRTLLM-1658][feat] Enable multiple response in trtllm-serve for TRT backend (#4623) | 2025-05-28 11:36:44 +08:00 |
| `tools/` | feat: Support Mistral Small 3.1 24B VLM in TRT workflow (#4183) | 2025-05-14 03:47:22 +08:00 |
| `__init__.py` | chore: Partition LlmArgs into TorchLlmArgs and TrtLlmArgs (#3823) | 2025-05-22 09:40:56 +08:00 |
| `_common.py` | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| `_dlpack_utils.py` | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| `_ipc_utils.py` | fix: Proper error bubbling for PyExecutor (#3321) | 2025-04-15 14:49:46 +08:00 |
| `_mnnvl_utils.py` | fix: Remove real size allocation (#4396) | 2025-05-18 19:13:22 +08:00 |
| `_utils.py` | feat: Skip sampler for intermediate pp stages. (#4514) | 2025-05-26 10:08:51 +08:00 |
| `builder.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `disaggregated_params.py` | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| `functional.py` | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| `graph_rewriting.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `logger.py` | perf: Fuse gemm setup function for SM90/SM100 MOE plugin path (#4146) | 2025-05-21 10:00:36 +08:00 |
| `lora_manager.py` | add changes for fp8, nemotron-nas, API (#4180) | 2025-05-18 23:27:25 +08:00 |
| `mapping.py` | fix: Fix moe_ep_groups/moe_cluster_groups in Mapping. (#4555) | 2025-05-23 10:41:49 +08:00 |
| `module.py` | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| `network.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `parameter.py` | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| `profiler.py` | test [TRTLLM-4477,TRTLLM-4481]: Accuracy test improvement (Part 3.5): Support GSM8K and GPQA (#3483) | 2025-04-22 07:38:16 +08:00 |
| `prompt_adapter_manager.py` | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| `python_plugin.py` | refactor: use x is None instead of x == None. (#4244) | 2025-05-15 20:00:04 +08:00 |
| `sampling_params.py` | fix: [nvbugs/5066257] serialization improvments (#3869) | 2025-05-23 13:06:29 +08:00 |
| `top_model_mixin.py` | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00 |
| `version.py` | chore: bump version to 0.21.0rc0 (#4465) | 2025-05-20 12:19:50 +08:00 |