TensorRT-LLM/examples

Latest commit 937f8f78a1 by Lucas Liebenwein (2026-01-02 18:46:31 -05:00):
[None][doc] promote AutoDeploy to beta feature in docs (#10372)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
| Path | Last commit | Last updated |
|---|---|---|
| `apps` | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `auto_deploy` | [None][doc] promote AutoDeploy to beta feature in docs (#10372) | 2026-01-02 18:46:31 -05:00 |
| `bindings/executor` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `configs` | [None] [doc] Update IFB performance guide & GPTOSS deployment guide (#10283) | 2025-12-25 05:52:04 -05:00 |
| `cpp/executor` | [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) | 2025-11-20 16:44:23 -08:00 |
| `cpp_library` | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `disaggregated` | [None][docs] Add NIXL-Libfabric Usage to Documentation (#10205) | 2025-12-23 23:05:40 -05:00 |
| `dora` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `draft_target_model` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `eagle` | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `infinitebench` | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| `language_adapter` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `layer_wise_benchmarks` | [TRTLLM-9615][feat] Implement a distributed tuning system (#9621) | 2025-12-15 21:08:53 +08:00 |
| `llm-api` | [None][fix] Fix request_id for best_of/n case (#8368) | 2025-12-26 22:20:24 +01:00 |
| `llm-eval/lm-eval-harness` | [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) | 2025-11-06 22:37:03 -08:00 |
| `longbench` | [TRTLLM-9805][feat] Skip Softmax Attention. (#9821) | 2025-12-21 02:52:42 -05:00 |
| `lookahead` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `medusa` | [OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (#9679) | 2025-12-07 07:14:05 -08:00 |
| `models` | [None][feat] Support VLM part for Mistral Large 3 (#10188) | 2025-12-25 11:20:58 -05:00 |
| `ngram` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `openai_triton` | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `opentelemetry` | [None][chore] Change trt-server to trtlllm-server in opentelemetry readme (#9173) | 2025-11-17 22:02:24 -08:00 |
| `python_plugin` | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| `quantization` | [OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (#9679) | 2025-12-07 07:14:05 -08:00 |
| `ray_orchestrator` | [TRTC-102][docs] --extra_llm_api_options->--config in docs/examples/tests (#10005) | 2025-12-19 13:48:43 -05:00 |
| `redrafter` | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| `sample_weight_stripping` | [None][chore] Weekly mass integration of release/1.1 -- rebase (#9522) | 2025-11-29 21:48:48 +08:00 |
| `scaffolding` | [None][feat] Deep Research Implemented with Scaffolding (#8452) | 2025-11-06 10:33:28 +08:00 |
| `serve` | [https://nvbugs/5747938][fix] Use local tokenizer (#10230) | 2025-12-26 22:08:10 +08:00 |
| `sparse_attention` | [TRTC-102][docs] --extra_llm_api_options->--config in docs/examples/tests (#10005) | 2025-12-19 13:48:43 -05:00 |
| `trtllm-eval` | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| `wide_ep` | [TRTC-102][docs] --extra_llm_api_options->--config in docs/examples/tests (#10005) | 2025-12-19 13:48:43 -05:00 |
| `__init__.py` | [TRTC-102][docs] --extra_llm_api_options->--config in docs/examples/tests (#10005) | 2025-12-19 13:48:43 -05:00 |
| `constraints.txt` | [None][chore] Bump version to 1.2.0rc7 (#10216) | 2025-12-23 15:07:47 +08:00 |
| `eval_long_context.py` | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| `generate_checkpoint_config.py` | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| `generate_xgrammar_tokenizer_info.py` | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| `hf_lora_convert.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `mmlu.py` | [https://nvbugs/4141427][chore] Add more details to LICENSE file (#9881) | 2025-12-13 08:35:31 +08:00 |
| `run.py` | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| `summarize.py` | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| `utils.py` | [https://nvbugs/5747930][fix] Use offline tokenizer for whisper models. (#10121) | 2025-12-20 09:42:07 +08:00 |