| Name | Last commit | Last commit date |
|---|---|---|
| apps | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| auto_deploy | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00 |
| bindings/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp_library | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| disaggregated | [https://nvbugs/5429636][feat] Kv transfer timeout (#8459) | 2025-10-22 09:29:02 -04:00 |
| dora | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| draft_target_model | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| eagle | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| infinitebench | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| language_adapter | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| llm-api | [TRTLLM-7159][docs] Add documentation for additional outputs (#8325) | 2025-10-27 09:52:04 +01:00 |
| llm-eval/lm-eval-harness | chore: update doc by replacing use_cuda_graph with cuda_graph_config (#5680) | 2025-07-04 15:39:15 +09:00 |
| longbench | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| lookahead | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| medusa | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| models | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| ngram | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| openai_triton | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| opentelemetry | [None][feat] Add opentelemetry tracing (#5897) | 2025-10-27 18:51:07 +08:00 |
| python_plugin | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| quantization | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| ray_orchestrator | [None][doc] Ray orchestrator initial doc (#8373) | 2025-10-14 21:17:57 -07:00 |
| redrafter | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| sample_weight_stripping | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| scaffolding | [None][feat] Dev DeepConf (#8362) | 2025-10-16 11:01:31 +08:00 |
| serve | [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) | 2025-10-24 12:53:40 -04:00 |
| trtllm-eval | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| wide_ep | [None][chore] Move submit.sh to python and use yaml configuration (#8003) | 2025-10-20 22:36:50 -04:00 |
| constraints.txt | [None][chore] Bump version to 1.2.0rc2 (#8562) | 2025-10-22 14:35:05 +08:00 |
| eval_long_context.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| generate_checkpoint_config.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| generate_xgrammar_tokenizer_info.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| hf_lora_convert.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| mmlu.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| run.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| summarize.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| utils.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |