kanshan / TensorRT-LLMs
Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00.
TensorRT-LLMs / tensorrt_llm / bench (at fa34cb7234)

Latest commit a02606a9e2 by Yan Chunwei, 2025-07-16 16:42:59 +08:00:
[TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
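For context on the breaking change above: the commit title suggests the PyTorch backend now takes its KV-cache settings through the single KvCacheConfig accepted by the LLM class. Below is a minimal usage sketch under that assumption, based on the documented tensorrt_llm.llmapi surface; exact field names and defaults may differ between releases, and the model id is just a placeholder.

```python
# Sketch only: passing KV-cache settings to the LLM API.
# Assumes the documented tensorrt_llm.llmapi surface; field names and defaults
# may vary between TensorRT-LLM releases.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

kv_cache_config = KvCacheConfig(
    free_gpu_memory_fraction=0.8,  # cap the KV cache at 80% of free GPU memory
    enable_block_reuse=True,       # reuse cached blocks across requests
)

llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder HF model id
    kv_cache_config=kv_cache_config,
)

outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```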
| Name | Last commit | Date |
| --- | --- | --- |
| benchmark/ | [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752) | 2025-07-16 16:42:59 +08:00 |
| build/ | [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371) | 2025-07-09 11:30:15 +03:00 |
| dataclasses/ | [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752) | 2025-07-16 16:42:59 +08:00 |
| utils/ | Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130) | 2025-06-15 18:54:04 +03:00 |
| __init__.py | Update TensorRT-LLM | 2024-08-20 18:55:15 +08:00 |
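The build/ entry above fixes the max-batch-size and max-token limits derived from KV-cache estimation. As a rough illustration of what such an estimate involves, here is a generic back-of-the-envelope calculation; it is not the repository's estimator, and the model dimensions below are hypothetical placeholders.

```python
# Illustrative KV-cache sizing, NOT the estimator in bench/build/.
# All model dimensions below are hypothetical placeholders.

def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int, head_dim: int,
                             dtype_bytes: int = 2) -> int:
    """Bytes needed to store one token's K and V activations across all layers."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes  # 2 = key + value

def max_tokens_in_cache(free_gpu_bytes: int, **model_dims: int) -> int:
    """Upper bound on how many tokens fit in the KV cache for a memory budget."""
    return free_gpu_bytes // kv_cache_bytes_per_token(**model_dims)

# Example: a Llama-7B-like config with 8 GiB left for the KV cache (fp16).
tokens = max_tokens_in_cache(
    8 * 1024**3, num_layers=32, num_kv_heads=32, head_dim=128, dtype_bytes=2)
print(f"~{tokens:,} tokens fit in the KV cache")  # ~16,384 tokens
```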