anything-llm/server/utils
Latest commit: 2a1202de54 by Timothy Carambat (2023-12-28 13:59:47 -08:00)
Patch Ollama Streaming chunk issues (#500)
- Replace stream/sync chats with Langchain interface for now
- Connects #499
- Ref: https://github.com/Mintplex-Labs/anything-llm/issues/495#issuecomment-1871476091
Directory          | Last commit                                              | Date
-------------------|----------------------------------------------------------|-----------
AiProviders        | Patch Ollama Streaming chunk issues (#500)               | 2023-12-28
chats              | Patch Ollama Streaming chunk issues (#500)               | 2023-12-28
database           | Full developer api (#221)                                | 2023-08-23
EmbeddingEngines   | fix: fully separate chunk concurrency from chunk length  | 2023-12-20
files              | chore: Force VectorCache to always be on                 | 2023-12-20
helpers            | Prevent external service localhost question (#497)       | 2023-12-28
http               | Add API key option to LocalAI (#407)                     | 2023-12-04
middleware         | Create manager role and limit default role (#351)        | 2023-11-13
prisma             | Add built-in embedding engine into AnythingLLM (#411)    | 2023-12-06
telemetry          | Replace custom sqlite dbms with prisma (#239)            | 2023-09-28
vectorDbProviders  | feat: add support for variable chunk length (#415)       | 2023-12-07