anything-llm/server/utils
Sean Hatfield 90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and OpenAI save properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset; reset AI provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation (see the sketch after this commit entry)

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
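
The bullets above describe the core of this change: each workspace can carry its own chat model, and the provider class picks it up when it is constructed, falling back to the environment-level default. Below is a minimal JavaScript sketch of that pattern under stated assumptions; the class name, the `modelPreference` parameter, the `workspace.chatModel` field, and the `OPEN_MODEL_PREF` env variable are illustrative, not a verbatim copy of the AnythingLLM code.

```js
// Hypothetical sketch: a provider that prefers the workspace's own chat
// model over the global env default when it is constructed.
class OpenAiLLM {
  constructor(embedder = null, modelPreference = null) {
    // Prefer the per-workspace model if one was passed in; otherwise fall
    // back to the env-level default, then a hard-coded default.
    this.model =
      modelPreference || process.env.OPEN_MODEL_PREF || "gpt-3.5-turbo";
    this.embedder = embedder;
  }
}

// Caller side: the chat handler would pass workspace.chatModel (if set)
// when instantiating the provider for that workspace.
function getLLMProvider(workspace = {}) {
  return new OpenAiLLM(null, workspace?.chatModel ?? null);
}
```
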
AiProviders Per workspace model selection (#582) 2024-01-17 12:59:25 -08:00
boot 523-Added support for HTTPS to Server. (#524) 2024-01-04 17:22:15 -08:00
chats Per workspace model selection (#582) 2024-01-17 12:59:25 -08:00
database Full developer api (#221) 2023-08-23 19:15:07 -07:00
EmbeddingEngines Fix present dimensions on vectorDBs to be inferred for providers who require it (#605) 2024-01-16 13:41:01 -08:00
files 570 document api return object (#608) 2024-01-16 16:04:22 -08:00
helpers Per workspace model selection (#582) 2024-01-17 12:59:25 -08:00
http prevent manager in multi-user from updating ENV via HTTP (#576) 2024-01-11 12:11:45 -08:00
middleware Change pwd check to O(1) check to prevent timing attacks - single user mode (#575) 2024-01-11 10:54:55 -08:00
prisma Add built-in embedding engine into AnythingLLM (#411) 2023-12-06 10:36:22 -08:00
telemetry Replace custom sqlite dbms with prisma (#239) 2023-09-28 14:00:03 -07:00
vectorDbProviders Fix present dimensions on vectorDBs to be inferred for providers who require it (#605) 2024-01-16 13:41:01 -08:00