server: continue to update other slots on embedding concurrent request (#5699)

* server: #5655 - continue to update the other slots when handling a concurrent embedding request.

* server: tests: add multi-user embeddings scenario as fixed

* server: tests: add concurrent requests against the OAI-compatible embedding endpoint

* server: tests: add OAI-compatible embedding with multiple inputs
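
The scenario those tests exercise can be reproduced against a running server by issuing several OAI-compatible embedding requests at the same time, each with multiple inputs. Below is a minimal sketch, assuming a server listening on localhost:8080 and cpp-httplib as the HTTP client; both are illustrative assumptions and not part of this commit.

```cpp
// Minimal sketch: several OAI-compatible embedding requests sent concurrently,
// each carrying multiple inputs, so more than one server slot is busy at once.
// Assumes a llama.cpp server on localhost:8080 and cpp-httplib as the client;
// neither is prescribed by this commit.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>
#include "httplib.h"

int main() {
    const std::string body = R"({
        "input": ["first document", "second document", "third document"]
    })";

    std::vector<std::thread> clients;
    for (int i = 0; i < 4; ++i) {
        clients.emplace_back([&body]() {
            httplib::Client cli("localhost", 8080);
            auto res = cli.Post("/v1/embeddings", body, "application/json");
            if (res) {
                std::printf("status: %d, body size: %zu\n", res->status, res->body.size());
            }
        });
    }
    for (auto & t : clients) {
        t.join();
    }
    return 0;
}
```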
Author: Pierrick Hymbert
Date:   2024-02-24 19:16:04 +01:00 (committed by GitHub)
Parent: 4c4cb30736
Commit: 9e359a4f47

5 changed files with 168 additions and 78 deletions

@@ -1836,7 +1836,7 @@ struct llama_server_context
                 send_embedding(slot);
                 slot.release();
                 slot.i_batch = -1;
-                return true;
+                continue;
             }
 
             completion_token_output result;
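
The one-line change above is the whole fix: the per-batch pass iterates over every active slot, and an early `return` after finishing one embedding slot silently skipped the remaining slots belonging to concurrent requests. The sketch below illustrates that control flow; `slot_t` and `update_slots` are simplified, hypothetical stand-ins for the real members of `llama_server_context`, shown only to make clear why `continue` is the right statement here.

```cpp
// Simplified sketch of the control flow, not the actual llama.cpp loop.
// slot_t and update_slots are hypothetical stand-ins for the real members
// of llama_server_context, shown only to illustrate return vs. continue.
#include <vector>

struct slot_t {
    bool is_embedding = false;
    int  i_batch      = -1;   // index of this slot's token in the current batch, -1 if none
};

bool update_slots(std::vector<slot_t> & slots) {
    for (auto & slot : slots) {
        if (slot.i_batch < 0) {
            continue; // nothing scheduled for this slot in the current batch
        }
        if (slot.is_embedding) {
            // in the real server: send_embedding(slot); slot.release();
            slot.i_batch = -1;
            // Before the fix the function returned here, so every later slot
            // (the other concurrent embedding requests) was left unprocessed
            // for this batch; continue moves on to the next slot instead.
            continue;
        }
        // ... sample and emit the next completion token for this slot ...
    }
    return true;
}
```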