Fixed requests in a batch actually being sent sequentially - https://github.com/meta-llama/synthetic-data-kit/issues/67#68
Conversation
Hi @HarshVaragiya! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Just a thought here: something like a request queue, or something similar to Go channels, to keep the vLLM server fed with requests at all times would help improve overall throughput. Even with the batching fixed, there are instances where the GPU is left slowly processing one or two long requests while the other requests in the batch are already done and the next batch has not yet been sent by synthetic-data-kit.
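A minimal sketch of the kind of queue-based feeder this suggests, assuming an OpenAI-compatible vLLM chat endpoint; the URL, payload shape, model name, and concurrency level are illustrative assumptions, not code from this PR:

```python
import asyncio
import aiohttp

async def feed_vllm(prompts, url="http://localhost:8000/v1/chat/completions",
                    model="meta-llama/Llama-3.1-8B-Instruct", concurrency=8):
    """Keep `concurrency` requests in flight at all times instead of waiting
    for a whole batch to finish before sending the next one."""
    queue: asyncio.Queue = asyncio.Queue(maxsize=2 * concurrency)
    results = []  # collected in completion order; track indices if input order matters

    async def worker(session):
        while True:
            prompt = await queue.get()
            if prompt is None:          # sentinel: no more work
                await queue.put(None)   # let the other workers see it too
                return
            payload = {"model": model,
                       "messages": [{"role": "user", "content": prompt}]}
            async with session.post(url, json=payload) as resp:
                data = await resp.json()
            results.append(data["choices"][0]["message"]["content"])

    async with aiohttp.ClientSession() as session:
        workers = [asyncio.create_task(worker(session)) for _ in range(concurrency)]
        for prompt in prompts:          # producer: feed the queue like a channel
            await queue.put(prompt)
        await queue.put(None)
        await asyncio.gather(*workers)
    return results
```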
Fixed the requests to be sent asynchronously and concurrently using aiohttp, so batching works correctly.
Pull Request
Description
Modified the `LLMClient._vllm_batch_completion` function in synthetic-data-kit/models/llm_client.py to fix the requests in a batch not being sent concurrently. The requests are now sent in parallel via an aiohttp client, so the vLLM backend can process them in parallel.
Modified the `__init__` function of the `LLMClient` class to expose an `http_request_timeout` configuration key that sets the HTTP timeout for all vLLM requests.

Fixes #67
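Taken together, the change behaves roughly like the sketch below. This is a hypothetical illustration rather than the exact code in llm_client.py: the method signature, payload fields, endpoint path, and the 180-second default shown here are assumptions.

```python
import asyncio
import aiohttp

async def _vllm_batch_completion(message_batches, api_base, model,
                                 http_request_timeout=180):
    """Send every request in the batch concurrently so vLLM can process them in parallel."""
    timeout = aiohttp.ClientTimeout(total=http_request_timeout)
    url = f"{api_base}/chat/completions"

    async def one_request(session, messages):
        payload = {"model": model, "messages": messages}
        async with session.post(url, json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data["choices"][0]["message"]["content"]

    async with aiohttp.ClientSession(timeout=timeout) as session:
        # gather() fires all requests at once instead of awaiting them one by one.
        return await asyncio.gather(*(one_request(session, m) for m in message_batches))
```

A synchronous caller can drive a coroutine like this with `asyncio.run(...)`.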
Type of change
Please delete non-relevant options.

- New config key `vllm.http_request_timeout` (with the same default as before)