| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| _test_llm_chat.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| _test_llm_server.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| _test_openai_chat_multimodal.py | feat: enhance trtllm serve multimodal (#3757) | 2025-05-15 16:16:31 -07:00 |
| _test_openai_chat_structural_tag.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| _test_openai_chat.py | [TRTLLM-1658][feat] Enable multiple response in trtllm-serve for TRT backend (#4623) | 2025-05-28 11:36:44 +08:00 |
| _test_openai_completions.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| _test_openai_consistent_chat.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _test_openai_lora.py | [TRTLLM-5831][feat] Add LoRA support for pytorch backend in trtllm-serve (#5376) | 2025-06-29 12:46:30 +00:00 |
| _test_openai_metrics.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _test_openai_misc.py | [fix][ci] move torch tests to run under torch stage (#5473) | 2025-06-26 14:31:38 +03:00 |
| _test_openai_multi_chat.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _test_openai_multi_gpu.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| _test_openai_multi_nodes.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| _test_openai_reasoning.py | start OAIServer with max_beam_width=1 for TorchSampler (#5427) | 2025-06-25 15:52:06 +08:00 |
| _test_trtllm_serve_benchmark.py | tests: [TRTQA-2906] add benchmark serving tests (#4901) | 2025-06-05 14:33:03 +08:00 |
| _test_trtllm_serve_duplicated_args.py | [feat] Allow overriding cli args with yaml file in trtllm-serve (#4164) | 2025-05-08 21:19:05 -04:00 |
| _test_trtllm_serve_example.py | doc: add genai-perf benchmark & slurm multi-node for trtllm-serve doc (#3407) | 2025-04-16 00:11:58 +08:00 |
| _test_trtllm_serve_lora.py | [TRTLLM-5831][feat] Add LoRA support for pytorch backend in trtllm-serve (#5376) | 2025-06-29 12:46:30 +00:00 |
| _test_trtllm_serve_multimodal_example.py | chore: Partition LlmArgs into TorchLlmArgs and TrtLlmArgs (#3823) | 2025-05-22 09:40:56 +08:00 |
| openai_server.py | feat: support abort disconnected requests (#3214) | 2025-04-07 16:14:58 +08:00 |
| README.md | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |