TensorRT-LLM/tensorrt_llm
Latest commit 80f918cc22 by Hanjun Cho
[None][feat] Add Qwen3 MoE support to TensorRT backend (#6470)
Signed-off-by: gkswns0531 <gkswns0531@gmail.com>
Signed-off-by: hanjuncho <gkswns0531@gmail.com>
Co-authored-by: bhsueh_NV <11360707+byshiue@users.noreply.github.com>
2025-08-06 17:02:35 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| _tensorrt_engine/ | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch/ | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| auto_parallel/ | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| bench/ | [https://nvbugs/5355007][fix] Set enable_chunked_context as True by default in trtllm bench (#6582) | 2025-08-05 11:11:36 -07:00 |
| commands/ | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00 |
| evaluate/ | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| executor/ | [TRTLLM-5508][feat] check input tokens + improve error handling (#5170) | 2025-08-05 18:27:43 +01:00 |
| inputs/ | [TRTLLM-6654][feat] Add support for external multimodal embeddings (#6263) | 2025-07-30 10:00:15 -04:00 |
| layers/ | feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) | 2025-07-30 09:20:16 -07:00 |
| llmapi/ | [None][opt] ADP schedule balance optimization (#6061) | 2025-08-06 09:38:02 +08:00 |
| models/ | [None][feat] Add Qwen3 MoE support to TensorRT backend (#6470) | 2025-08-06 17:02:35 +08:00 |
| plugin/ | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization/ | Deepseek R1 FP8 Support on Blackwell (#6486) | 2025-08-01 10:26:28 +08:00 |
| runtime/ | [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) | 2025-07-25 18:10:40 -04:00 |
| scaffolding/ | [https://nvbugs/5387375] fix(scaffolding): fix scaffolding aime test in test_e2e (#6140) | 2025-07-18 10:34:37 +08:00 |
| serve/ | [TRTLLM-6761][refactor] Replace LogitBiasLogitsProcessor with embedding bias tensor system (#6464) | 2025-08-05 07:14:24 -07:00 |
| tools/ | [5385981] fix: Update the usage of VisionAttention init API. (#6413) | 2025-07-29 16:41:48 +08:00 |
| __init__.py | feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364) | 2025-07-01 20:12:55 +08:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _mnnvl_utils.py | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00 |
| _utils.py | [TRTLLM-6826][feat] Allow sending more than 2GiB through MPI by using mpi4py.util.pkl5 (#6522) | 2025-08-05 11:28:26 +03:00 |
| builder.py | feat: nanobind bindings (#6185) | 2025-07-21 08:56:57 +01:00 |
| disaggregated_params.py | [fix]: Skip prompt length checking for generation only requests (#6146) | 2025-07-19 21:26:37 +08:00 |
| functional.py | feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) | 2025-07-30 09:20:16 -07:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_manager.py | [TRTLLM-6611][feat] Add warnings and stricter validation to LoraManager adapter loading (#6453) | 2025-07-31 22:22:51 -04:00 |
| mapping.py | fix: Mapping rank boundary check bug (#4935) | 2025-06-27 07:27:59 +08:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | [TRTLLM-6761][refactor] Replace LogitBiasLogitsProcessor with embedding bias tensor system (#6464) | 2025-08-05 07:14:24 -07:00 |
| scheduling_params.py | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| serialization.py | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| top_model_mixin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| version.py | [None][chore] Bump version to 1.0.0rc6 (#6597) | 2025-08-04 04:39:15 -04:00 |