TensorRT-LLM/examples
Latest commit 2ff772ef71: [None][feat] Add benchmark to DeepConf (#8776), Cao Dong (Signed-off-by: Dong Cao <docao@nvidia.com>), 2025-11-03 16:05:50 +08:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| apps | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| auto_deploy | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00 |
| bindings/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp_library | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| disaggregated | [TRTLLM-8431][doc] update public doc and example, add etcd auto-scaling tests (#8602) | 2025-10-28 17:04:53 -07:00 |
| dora | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| draft_target_model | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| eagle | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| infinitebench | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| language_adapter | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| layer_wise_benchmarks | [None][fix] Layer wise benchmarks: use local models, lint (#8799) | 2025-10-30 09:47:46 -07:00 |
| llm-api | [TRTLLM-7159][docs] Add documentation for additional outputs (#8325) | 2025-10-27 09:52:04 +01:00 |
| llm-eval/lm-eval-harness | chore: update doc by replacing use_cuda_graph with cuda_graph_config (#5680) | 2025-07-04 15:39:15 +09:00 |
| longbench | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| lookahead | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| medusa | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| models | [None][feat] Use ruff for formatting and linting new files by default (#8629) | 2025-11-01 16:11:40 +01:00 |
| ngram | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| openai_triton | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| opentelemetry | [None][feat] Add opentelemetry tracing (#5897) | 2025-10-27 18:51:07 +08:00 |
| python_plugin | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| quantization | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| ray_orchestrator | [None][chore] Use a cached model path for Ray integration test (#8660) | 2025-10-27 19:16:06 -07:00 |
| redrafter | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| sample_weight_stripping | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| scaffolding | [None][feat] Add benchmark to DeepConf (#8776) | 2025-11-03 16:05:50 +08:00 |
| serve | [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) | 2025-10-24 12:53:40 -04:00 |
| trtllm-eval | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| wide_ep | [None][chore] Move submit.sh to python and use yaml configuration (#8003) | 2025-10-20 22:36:50 -04:00 |
| constraints.txt | [None][chore] Bump version to 1.2.0rc2 (#8562) | 2025-10-22 14:35:05 +08:00 |
| eval_long_context.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| generate_checkpoint_config.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| generate_xgrammar_tokenizer_info.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| hf_lora_convert.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| mmlu.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| run.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| summarize.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| utils.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |