TensorRT-LLM/tensorrt_llm
Latest commit 5959d72d74 by yifeizhang-c
[https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6975)
Signed-off-by: Yifei Zhang <219273404+yifeizhang-c@users.noreply.github.com>
2025-08-20 16:32:27 +08:00
_tensorrt_engine
_torch [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6975) 2025-08-20 16:32:27 +08:00
auto_parallel
bench [https://nvbugs/5453667] [fix] reverting a breaking change: make trtllm-bench enable_chunked_context defaults backend-dependent (#6956) 2025-08-16 00:29:02 -04:00
commands [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) 2025-08-05 07:47:41 +00:00
evaluate
executor [https://nvbugs/5383702][fix] error propagation in GenerationExecutor (#6793) 2025-08-12 12:28:06 +08:00
inputs [TRTLLM-6654][feat] Add support for external multimodal embeddings (#6263) 2025-07-30 10:00:15 -04:00
layers feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) 2025-07-30 09:20:16 -07:00
llmapi [None][doc] add status labels to LLM class's api reference (#6899) 2025-08-19 21:50:04 -04:00
models [None][feat] Add Qwen3 MoE support to TensorRT backend (#6470) 2025-08-06 17:02:35 +08:00
plugin
quantization Deepseek R1 FP8 Support on Blackwell (#6486) 2025-08-01 10:26:28 +08:00
runtime [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) 2025-07-25 18:10:40 -04:00
scaffolding [https://nvbugs/5387375] fix(scaffolding): fix scaffolding aime test in test_e2e (#6140) 2025-07-18 10:34:37 +08:00
serve [TRTLLM-6675][infra] Nixl test completion (#6623) 2025-08-08 10:15:54 +08:00
tools [https://nvbugs/5429689][fix] Fix mllama model structure update with transformers issue (#6699) 2025-08-11 10:48:35 +08:00
__init__.py
_common.py
_dlpack_utils.py
_ipc_utils.py
_mnnvl_utils.py [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) 2025-07-12 15:50:31 +09:00
_utils.py [TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6786) 2025-08-11 14:31:39 -04:00
builder.py feat: nanobind bindings (#6185) 2025-07-21 08:56:57 +01:00
disaggregated_params.py [fix]: Skip prompt length checking for generation only requests (#6146) 2025-07-19 21:26:37 +08:00
functional.py feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) 2025-07-30 09:20:16 -07:00
graph_rewriting.py
logger.py
lora_manager.py [TRTLLM-6611][feat] Add warnings and stricter validation to LoraManager adapter loading (#6453) 2025-07-31 22:22:51 -04:00
mapping.py
math_utils.py
module.py
network.py
parameter.py
profiler.py
prompt_adapter_manager.py
python_plugin.py
sampling_params.py [TRTLLM-6761][refactor] Replace LogitBiasLogitsProcessor with embedding bias tensor system (#6464) 2025-08-05 07:14:24 -07:00
scheduling_params.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
serialization.py
top_model_mixin.py
version.py [None][chore] Bump version to 1.0.0 (#6652) 2025-08-07 14:15:34 +08:00