TensorRT-LLM/examples
Latest commit: 1ee7a08d2b — [5830][feat] Improve LoRA cache memory control (#6220)
Author: Amit Zuker (amitz-nv), 2025-07-31 09:26:38 +03:00
Name | Last commit | Last commit date
apps | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00
auto_deploy | [AutoDeploy] merge feat/ad-2025-07-07 (#6196) | 2025-07-23 05:11:04 +08:00
bindings/executor | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00
cpp/executor | [TRTLLM-4770][feat] Enhance cpp executor cmake to listen to ENABLE_MU… (#5104) | 2025-07-11 10:59:44 +08:00
cpp_library | Update TensorRT-LLM (#1274) | 2024-03-12 18:15:52 +08:00
disaggregated | chore: update trtllm-serve usage doc by removing backend parameter when it use torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00
dora | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
draft_target_model | Fix: draft target README and set exclude_input_in_output to False (#4882) | 2025-06-03 23:45:02 -07:00
eagle | [refactor] Simplification of Speculative decoding configs (#5639) | 2025-07-10 11:37:30 -04:00
infinitebench | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00
language_adapter | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
llm-api | [5830][feat] Improve LoRA cache memory control (#6220) | 2025-07-31 09:26:38 +03:00
llm-eval/lm-eval-harness | chore: update doc by replacing use_cuda_graph with cuda_graph_config (#5680) | 2025-07-04 15:39:15 +09:00
lookahead | doc: fix path after examples migration (#3814) | 2025-04-24 02:36:45 +08:00
medusa | feat: adding multimodal (only image for now) support in trtllm-bench (#3490) | 2025-04-18 07:06:16 +08:00
models | [doc][ci][Qwen3][nvbugs 5374145] Add Qwen3 235B eagle3 CI (#6477) | 2025-07-31 09:37:23 +08:00
ngram | [chore] Clean up quickstart_advanced.py (#6021) | 2025-07-21 15:00:59 -04:00
openai_triton | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
python_plugin | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
quantization | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00
redrafter | ReDrafter support for Qwen (#4875) | 2025-06-28 02:33:10 +08:00
sample_weight_stripping | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00
scaffolding | [https://nvbugs/5387375] fix(scaffolding): fix scaffolding aime test in test_e2e (#6140) | 2025-07-18 10:34:37 +08:00
serve | chore: update trtllm-serve usage doc by removing backend parameter when it use torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00
trtllm-eval | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00
wide_ep | doc: remove backend parameter for trtllm-bench when backend is set to… (#6428) | 2025-07-29 11:01:21 -04:00
constraints.txt | chore: bump version to 1.0.0rc5 (#6252) | 2025-07-22 16:24:28 +08:00
eval_long_context.py | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00
generate_checkpoint_config.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00
generate_xgrammar_tokenizer_info.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00
hf_lora_convert.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
mmlu.py | feat: run mmlu and summarize without engine_dir. (#4056) | 2025-05-05 19:35:07 +08:00
run.py | [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) | 2025-07-25 18:10:40 -04:00
summarize.py | [refactor] Unify name of NGram speculative decoding (#5937) | 2025-07-19 12:59:57 +08:00
utils.py | [refactor] Unify name of NGram speculative decoding (#5937) | 2025-07-19 12:59:57 +08:00