TensorRT-LLM/tensorrt_llm/executor
pcastonguay fe6f14b2b1
fix: Fixing issue with first gen token being returned twice in streaming (#3427)
* fix: Fixing issue with first gen token being returned twice with streaming

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Fixing not_expectring_strings in test

---------
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-13 22:45:09 -04:00
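The commit above fixes a class of streaming bug worth illustrating: when each decoding step reports the cumulative token sequence so far, the streaming loop must emit only the tokens not already sent, and an off-by-one in that bookkeeping duplicates the first generated token. The sketch below is a hypothetical, standalone illustration of the pattern, not TensorRT-LLM's actual executor code; the function and variable names are invented for the example.

```python
# Hypothetical illustration (not TensorRT-LLM source) of the bug class fixed
# in #3427: re-emitting the first generated token in a streaming loop.

def stream_new_tokens(cumulative_outputs):
    """Yield only newly generated tokens from cumulative per-step outputs.

    `cumulative_outputs` is an iterable of token lists, each holding all
    tokens generated so far, e.g. [[1], [1, 2], [1, 2, 3]].
    """
    emitted = 0  # count of tokens already streamed to the client
    for tokens in cumulative_outputs:
        new = tokens[emitted:]   # a buggy variant slices from 0 on the
        emitted = len(tokens)    # first step, duplicating the first token
        yield from new

# Correct behavior: each token appears exactly once in the stream.
print(list(stream_new_tokens([[1], [1, 2], [1, 2, 3]])))  # → [1, 2, 3]
```

If `emitted` were updated late (or the first step skipped the slice), the first step would yield `[1]` and the second would yield `[1, 2]`, streaming token `1` twice, which matches the symptom described in the commit title.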
__init__.py          Update TensorRT-LLM (#2873)                                                            2025-03-11 21:13:42 +08:00
executor.py          Update (#2978)                                                                         2025-03-23 16:39:35 +08:00
ipc.py               chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)             2025-04-10 21:57:06 +08:00
postproc_worker.py   chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)             2025-04-10 21:57:06 +08:00
proxy.py             fix: Fixing issue with first gen token being returned twice in streaming (#3427)       2025-04-13 22:45:09 -04:00
request.py           Update (#2978)                                                                         2025-03-23 16:39:35 +08:00
result.py            fix: Fixing issue with first gen token being returned twice in streaming (#3427)       2025-04-13 22:45:09 -04:00
utils.py             make LLM-API slurm examples executable (#3402)                                         2025-04-13 21:42:45 +08:00
worker.py            fix: Fixing issue with first gen token being returned twice in streaming (#3427)       2025-04-13 22:45:09 -04:00