anything-llm/server/utils/AiProviders
Timothy Carambat a8ec0d9584
Compensate for upper OpenAI embedding limit chunk size (#292)
The limit is due to the POST body max size; sufficiently large requests will abort automatically.
We should report that error back on the frontend during embedding.
Update vectordb providers to return on failure.
2023-10-26 10:57:37 -07:00
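The commit above is about batching embedding inputs so no single request grows past the POST body size limit, and about returning the failure so the frontend can report it. Below is a minimal sketch of that idea, assuming the official `openai` Node SDK (v4-style client); the `EMBEDDING_BATCH_SIZE` constant and the `embedChunks` helper are hypothetical illustrations, not the project's actual provider code.

```js
// Sketch only: split texts into capped batches so each embeddings POST body
// stays small enough to avoid being aborted, and surface any failure.
const { OpenAI } = require("openai"); // assumes the official openai npm package (v4)

const EMBEDDING_BATCH_SIZE = 500; // hypothetical upper limit per request

async function embedChunks(textChunks, apiKey) {
  const client = new OpenAI({ apiKey });

  // Group the input texts into batches no larger than the cap.
  const batches = [];
  for (let i = 0; i < textChunks.length; i += EMBEDDING_BATCH_SIZE) {
    batches.push(textChunks.slice(i, i + EMBEDDING_BATCH_SIZE));
  }

  const vectors = [];
  try {
    for (const batch of batches) {
      const { data } = await client.embeddings.create({
        model: "text-embedding-ada-002",
        input: batch,
      });
      vectors.push(...data.map((d) => d.embedding));
    }
  } catch (e) {
    // Return the error so callers (e.g. a vectordb provider) can stop early
    // and report it back to the frontend instead of failing silently.
    return { vectors: null, error: e.message };
  }
  return { vectors, error: null };
}
```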
azureOpenAi Make OpenAI Azure embedding requests run concurrently to avoid input limits per call (#211) 2023-08-22 10:23:29 -07:00 (see the sketch after this listing)
openAi Compensate for upper OpenAI embedding limit chunk size (#292) 2023-10-26 10:57:37 -07:00
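The azureOpenAi entry (#211) addresses Azure's small per-call input limit by splitting the texts into small batches and embedding them concurrently rather than in one oversized call. A minimal sketch of that approach follows, assuming the `@azure/openai` client library; the `MAX_INPUTS_PER_CALL` constant and the `embedConcurrently` helper are hypothetical, not the provider's actual implementation.

```js
// Sketch only: batch the inputs under the per-call cap and fire all
// embedding requests concurrently with Promise.all.
const { OpenAIClient, AzureKeyCredential } = require("@azure/openai"); // assumed SDK

const MAX_INPUTS_PER_CALL = 16; // hypothetical per-request input cap

async function embedConcurrently(textChunks, endpoint, apiKey, deployment) {
  const client = new OpenAIClient(endpoint, new AzureKeyCredential(apiKey));

  // Split the texts into batches that respect the per-call input limit.
  const batches = [];
  for (let i = 0; i < textChunks.length; i += MAX_INPUTS_PER_CALL) {
    batches.push(textChunks.slice(i, i + MAX_INPUTS_PER_CALL));
  }

  // Run all batch requests at once; Promise.all rejects on the first failure,
  // which the caller can catch and report back to the frontend.
  const results = await Promise.all(
    batches.map((batch) => client.getEmbeddings(deployment, batch))
  );
  return results.flatMap((res) => res.data.map((d) => d.embedding));
}
```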