TensorRT-LLM/tensorrt_llm
Latest commit: 04b112651b by Yuxian Qiu
[None][feat] Hang detection for executor loop and worker. (#10480)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-13 02:34:32 -05:00
_tensorrt_engine
_torch [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
bench [None][chore] Print correct backend name in benchmark report (#10597) 2026-01-12 14:46:00 -05:00
commands [None][chore] remove redundant retries while binding to arbitrary port (#10452) 2026-01-06 10:39:15 -05:00
evaluate [None][fix] Mistral Large 3: minor code refinements (#10405) 2026-01-08 06:38:49 -05:00
executor [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
inputs [https://nvbugs/5752687][fix] Choose register model config over root config for VLM (#10553) 2026-01-09 12:10:52 -05:00
layers [None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961) 2025-12-27 11:50:59 -08:00
llmapi [TRTLLM-9522][fix] broken cast (#9975) 2026-01-08 06:47:39 -05:00
metrics [None][feat] Add trtllm_ prefix for exposed metrics (#8845) 2025-11-06 15:27:18 +08:00
models [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) 2026-01-05 20:08:03 +08:00
plugin [None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (#9538) 2025-11-28 16:45:23 +08:00
quantization [None][feat] sm100 weight-only kernel (#10190) 2026-01-05 09:44:36 +08:00
runtime [#6425][fix] address CUDA stream sync issue in ModelRunnerCPP (#6426) 2025-12-12 13:33:22 +08:00
scaffolding
serve [None][chore] Unify DS tool parser names (#10239) 2025-12-31 14:40:07 +08:00
tokenizer [https://nvbugs/5684820][fix] fix the detokenizer issue for DeepSeek-v3.2 (#10106) 2025-12-22 10:56:33 +08:00
tools [None][feat] Layer-wise benchmarks: support TEP balance, polish slurm scripts (#10237) 2026-01-05 11:23:04 +08:00
__init__.py [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) 2025-12-11 09:33:25 -08:00
_common.py [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
_dlpack_utils.py
_ipc_utils.py [None][chore] Modify python ipc_util to align with C++ path (#9894) 2025-12-12 15:55:22 +08:00
_mnnvl_utils.py [TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986) 2025-12-23 18:14:30 -08:00
_ray_utils.py
_utils.py [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
builder.py
disaggregated_params.py [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) 2025-12-22 06:32:49 -05:00
functional.py [#8921][feat] Added symmetric memory AllReduce strategy (#8919) 2025-12-08 13:12:56 -08:00
graph_rewriting.py
logger.py
lora_helper.py
lora_manager.py
mapping.py [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) 2026-01-05 20:08:03 +08:00
math_utils.py
module.py
network.py
parameter.py
profiler.py
prompt_adapter_manager.py
python_plugin.py
ray_stub.py
sampling_params.py [None][feat] add the eos tokens in generation config to stop words in the sampler (#10389) 2026-01-06 09:24:03 +08:00
scheduling_params.py
serialization.py
top_model_mixin.py
version.py [None][chore] Bump version to 1.2.0rc8 (#10542) 2026-01-08 04:51:44 -05:00
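The headline commit in this listing, "Hang detection for executor loop and worker" (#10480), touches `_torch`, `executor`, `_common.py`, and `_utils.py`. The listing itself does not describe how that feature works, so the following is only a minimal, hypothetical sketch of the general heartbeat-watchdog pattern such a feature typically relies on; the `HangDetector` class, `heartbeat()` method, and all parameter names below are invented for illustration and are not TensorRT-LLM APIs.

```python
import threading
import time

# Hypothetical illustration of a heartbeat-based watchdog; NOT the actual
# implementation behind PR #10480. All names here are made up.
class HangDetector:
    def __init__(self, timeout: float, on_hang):
        self._timeout = timeout          # seconds of silence before reporting a hang
        self._on_hang = on_hang          # callback invoked when a hang is suspected
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def heartbeat(self):
        # Called by the monitored loop on every iteration to report progress.
        with self._lock:
            self._last_beat = time.monotonic()

    def _watch(self):
        # Periodically check how long it has been since the last heartbeat.
        while not self._stop.wait(self._timeout / 4):
            with self._lock:
                silent_for = time.monotonic() - self._last_beat
            if silent_for > self._timeout:
                self._on_hang(silent_for)


if __name__ == "__main__":
    detector = HangDetector(
        timeout=2.0,
        on_hang=lambda t: print(f"possible hang: no heartbeat for {t:.1f}s"),
    )
    detector.start()
    for _ in range(5):
        detector.heartbeat()   # the monitored loop reports progress
        time.sleep(0.5)        # simulated work
    time.sleep(3.0)            # simulated stall; the watchdog fires
    detector.stop()
```

In practice a watchdog like this would log diagnostics (stack traces, rank IDs) or abort the process rather than just print, but that policy is deployment-specific and is not inferable from the listing above.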