anything-llm/server/endpoints
Sean Hatfield 90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset; reset AI provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
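The commit above adds a per-workspace model preference with a fallback to the provider-level default. A minimal sketch of that resolution order, assuming hypothetical names (`workspace.chatModel` and an env-style default are illustrative, not the project's actual fields):

```javascript
// Hypothetical sketch of per-workspace model resolution.
// `workspace.chatModel` and `envDefault` are assumed names for illustration.
function resolveChatModel(workspace, envDefault) {
  // Prefer the model saved on the workspace record; if the workspace
  // has no preference (or is missing), fall back to the global default.
  return workspace?.chatModel ?? envDefault;
}

// A workspace with its own model overrides the global default:
resolveChatModel({ chatModel: "gpt-4" }, "gpt-3.5-turbo"); // "gpt-4"
// A workspace without one inherits the provider-level setting:
resolveChatModel({}, "gpt-3.5-turbo"); // "gpt-3.5-turbo"
```

Resolving the preference once, at class instantiation as the commit notes, keeps every chat request on the same model without re-reading settings per call.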
api           | Per workspace model selection (#582)                             | 2024-01-17 12:59:25 -08:00
extensions    | Add ability to grab youtube transcripts via doc processor (#470) | 2023-12-18 17:17:26 -08:00
admin.js      | Warning about switching embedder or vectordb (#385)              | 2023-11-16 14:35:14 -08:00
chat.js       | Implement streaming for workspace chats via API (#604)           | 2024-01-16 10:37:46 -08:00
invite.js     | Dynamic vector count on workspace settings (#567)                | 2024-01-10 13:18:48 -08:00
system.js     | Per workspace model selection (#582)                             | 2024-01-17 12:59:25 -08:00
utils.js      | How to: run docker on remote IP                                  | 2023-08-15 11:36:07 -07:00
workspaces.js | Document Processor v2 (#442)                                     | 2023-12-14 15:14:56 -08:00