kanshan / TensorRT-LLMs
Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-31 08:11:27 +08:00
TensorRT-LLMs / tensorrt_llm / bench / benchmark (at commit 76c3a12bcb)
Latest commit 5aa958a11a by tomeras91 on 2025-07-09 11:30:15 +03:00: [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371)
Signed-off-by: Tomer Asida <57313761+tomeras91@users.noreply.github.com>
Name             Last commit                                                                                            Last updated
..
utils            [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371)   2025-07-09 11:30:15 +03:00
__init__.py      Update TensorRT-LLM (#2389)                                                                            2024-10-29 22:24:38 +08:00
low_latency.py   [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312)                             2025-06-20 03:01:10 +08:00
throughput.py    [AutoDeploy] merge feat/ad-2025-06-29 (#5737)                                                          2025-07-04 10:21:18 +09:00