anything-llm/server
Timothy Carambat a8ec0d9584
Compensate for upper OpenAI embedding limit chunk size (#292)
The limit is due to the POST body max size; sufficiently large requests will abort automatically.
We should report that error back on the frontend during embedding.
Update vector DB providers to return on failure.
2023-10-26 10:57:37 -07:00
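The fix described above caps request size by splitting text chunks into multiple embedding calls. A minimal sketch of that idea, assuming a hypothetical `batchForEmbedding` helper and an illustrative byte limit (names and values are not the project's actual API):

```javascript
// Hypothetical sketch: group text chunks into batches so that each
// embedding request's JSON body stays under an assumed POST size limit.
// `maxBodyBytes` is an illustrative value, not the real OpenAI cap.
function batchForEmbedding(chunks, maxBodyBytes = 1_000_000) {
  const batches = [];
  let current = [];
  let currentBytes = 0;

  for (const chunk of chunks) {
    // Measure the serialized size this chunk adds to the request body.
    const size = Buffer.byteLength(JSON.stringify(chunk), "utf8");
    // Start a new batch when adding this chunk would exceed the limit.
    if (current.length > 0 && currentBytes + size > maxBodyBytes) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(chunk);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each batch can then be sent as a separate embedding request, and a failure in any one batch can be surfaced to the frontend instead of the whole request silently aborting.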
endpoints Compensate for upper OpenAI embedding limit chunk size (#292) 2023-10-26 10:57:37 -07:00
models Compensate for upper OpenAI embedding limit chunk size (#292) 2023-10-26 10:57:37 -07:00
prisma Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00
storage remove bakup db 2023-10-24 18:05:18 -07:00
swagger AnythingLLM UI overhaul (#278) 2023-10-23 13:10:34 -07:00
utils Compensate for upper OpenAI embedding limit chunk size (#292) 2023-10-26 10:57:37 -07:00
.env.example resolves #259 (#260) 2023-09-29 13:20:06 -07:00
.gitignore AnythingLLM UI overhaul (#278) 2023-10-23 13:10:34 -07:00
.nvmrc Implement Chroma Support (#1) 2023-06-07 21:31:35 -07:00
index.js Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00
nodemon.json Full developer api (#221) 2023-08-23 19:15:07 -07:00
package-lock.json Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00
package.json Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00
yarn.lock Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00