TensorRT-LLM/examples
Latest commit c6cce398f5 by Zero Zeng: [TRTLLM-9053][feat] Support accuracy test and install from wheel (#9038)
Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
2025-11-13 23:34:47 -08:00
apps [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
auto_deploy [#8763][feature] AutoDeploy: configurable dtype for caching (#8812) 2025-11-10 22:17:14 -08:00
bindings/executor [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
configs [TRTLLM-8680][doc] Add table with one-line deployment commands to docs (#8173) 2025-11-03 17:42:41 -08:00
cpp/executor [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
cpp_library [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
disaggregated [TRTLLM-9053][feat] Support accuracy test and install from wheel (#9038) 2025-11-13 23:34:47 -08:00
dora Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
draft_target_model [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
eagle [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
infinitebench Update TensorRT-LLM (#1725) 2024-06-04 20:26:32 +08:00
language_adapter [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
layer_wise_benchmarks [None][feat] Add Qwen3-Next to layer-wise benchmarks (#9065) 2025-11-14 10:03:00 +08:00
llm-api [None] [feat] Use triton kernels for RocketKV prediction module (#8682) 2025-11-13 18:51:09 -08:00
llm-eval/lm-eval-harness [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00
longbench [None] [feat] Use triton kernels for RocketKV prediction module (#8682) 2025-11-13 18:51:09 -08:00
lookahead [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
medusa [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) 2025-10-15 17:09:30 +09:00
models [None][doc] Add DeepSeek-V3.2-Exp document (#9141) 2025-11-13 22:01:58 -08:00
ngram [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
openai_triton [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
opentelemetry [None][feat] Add opentelemetry tracing (#5897) 2025-10-27 18:51:07 +08:00
python_plugin [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
quantization [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
ray_orchestrator [None][chore] Use cached model in all ray tests (#8962) 2025-11-06 15:14:15 +01:00
redrafter [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) 2025-10-15 17:09:30 +09:00
sample_weight_stripping [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
scaffolding [None][feat] Deep Research Implemented with Scaffolding (#8452) 2025-11-06 10:33:28 +08:00
serve [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) 2025-10-24 12:53:40 -04:00
trtllm-eval test: Add LLGuidance test and refine guided decoding (#5348) 2025-06-25 14:12:56 +08:00
wide_ep [TRTLLM-9053][feat] Support accuracy test and install from wheel (#9038) 2025-11-13 23:34:47 -08:00
constraints.txt [None][chore] Bump version to 1.2.0rc3 (#9004) 2025-11-07 01:24:32 -08:00
eval_long_context.py [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
generate_checkpoint_config.py [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
generate_xgrammar_tokenizer_info.py Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
hf_lora_convert.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
mmlu.py [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) 2025-10-15 17:09:30 +09:00
run.py [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
summarize.py [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
utils.py [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00