TensorRT-LLM/examples/llm-api
coldwaterq 1cf0e672e7
fix: [nvbugs/5066257] serialization improvements (#3869)
* added a restricted pickler and unpickler in a separate serialization function.

Signed-off-by: coldwaterq@users.noreply.github.com <coldwaterq@users.noreply.github.com>

* updated IPC to remove approved classes, removed the serialization function because it didn't work for all objects (which made debugging harder), and added tests.

Signed-off-by: coldwaterq@users.noreply.github.com <coldwaterq@users.noreply.github.com>

* removed the LLM arg and moved class registration to a serialization module function. Also added missing classes to the approved list.

Signed-off-by: coldwaterq <coldwaterq@users.noreply.github.com>

* cleaned up a couple of files to reduce conflicts with main.

Signed-off-by: coldwaterq <coldwaterq@users.noreply.github.com>

* fix unit tests

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* reorder BASE_ZMQ_CLASSES list alphabetically

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* fix tests and move LogitsProcessor registration to base class

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* revert changes to import log of tensorrt_llm._torch.models

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* added comments to explain why BASE_ZMQ_CLASSES has to be passed into spawned child processes

Signed-off-by: coldwaterq <coldwaterq@users.noreply.github.com>

* fix tests and move LogitsProcessor registration to base class

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* additional comments for multiprocess approved list sync

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* add dataclass from tests

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

---------

Signed-off-by: coldwaterq@users.noreply.github.com <coldwaterq@users.noreply.github.com>
Signed-off-by: coldwaterq <coldwaterq@users.noreply.github.com>
Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>
Co-authored-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>
2025-05-23 13:06:29 +08:00
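The restricted pickling described in this commit can be sketched as an allow-list `pickle.Unpickler` subclass. The names below (`BASE_ZMQ_CLASSES`, `register_approved_class`, `restricted_loads`) follow the commit message but are illustrative; TensorRT-LLM's actual serialization module may differ in API and detail.

```python
import io
import pickle

# Approved (module, qualname) pairs. In TensorRT-LLM this role is played by
# BASE_ZMQ_CLASSES; per the commit comments, the list must also be passed to
# spawned child processes so parent and child agree on what may cross the
# ZMQ IPC boundary.
BASE_ZMQ_CLASSES = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("collections", "OrderedDict"),
}

def register_approved_class(cls):
    """Add a class (e.g. a custom LogitsProcessor subclass) to the allow-list."""
    BASE_ZMQ_CLASSES.add((cls.__module__, cls.__qualname__))

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to reconstruct any class that is not explicitly approved."""
    def find_class(self, module, name):
        if (module, name) not in BASE_ZMQ_CLASSES:
            raise pickle.UnpicklingError(
                f"{module}.{name} is not an approved class")
        return super().find_class(module, name)

def restricted_loads(data: bytes):
    """Deserialize bytes received over IPC using the restricted unpickler."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Overriding `find_class` is the mechanism the `pickle` documentation itself recommends for restricting globals: plain containers deserialize normally, while any payload referencing an unapproved class raises before it can be instantiated.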
| File | Last commit | Date |
| --- | --- | --- |
| llm_auto_parallel.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_eagle_decoding.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_guided_decoding.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_inference_async_streaming.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_inference_async.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_inference_customize.py | chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732) | 2025-05-07 13:20:25 +08:00 |
| llm_inference_distributed.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_inference_kv_events.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| llm_inference.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| llm_logits_processor.py | fix: [nvbugs/5066257] serialization improvements (#3869) | 2025-05-23 13:06:29 +08:00 |
| llm_lookahead_decoding.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| llm_medusa_decoding.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_mgmn_llm_distributed.sh | make LLM-API slurm examples executable (#3402) | 2025-04-13 21:42:45 +08:00 |
| llm_mgmn_trtllm_bench.sh | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00 |
| llm_mgmn_trtllm_serve.sh | [TRTQA-2802][fix]: add --host for mgmn serve examples script (#4175) | 2025-05-12 13:28:42 +08:00 |
| llm_multilora.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llm_quantization.py | feat: use cudaMalloc to allocate kvCache (#3303) | 2025-04-08 10:59:14 +08:00 |
| quickstart_example.py | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| README.md | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00 |

LLM API Examples

Please refer to the official documentation, examples, and customization guides for detailed information and usage guidelines for the LLM API.