Commit Graph

9 Commits

Author SHA1 Message Date
Sean Hatfield
e99c74aec1
[DOCS] Update Docker documentation to show how to set up Ollama with the Dockerized version of AnythingLLM (#774)
* update HOW_TO_USE_DOCKER to help with Ollama setup using docker

* update HOW_TO_USE_DOCKER

* styles update

* create separate README for ollama and link to it in HOW_TO_USE_DOCKER

* styling update
2024-02-21 18:42:32 -08:00
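
For context on the documentation change above: when AnythingLLM runs inside a container, localhost resolves to the container itself, so an Ollama server running on the host must be reached through host.docker.internal (on Linux this typically also needs --add-host=host.docker.internal:host-gateway on docker run). A minimal connectivity sketch, assuming Node 18+ fetch and an illustrative base URL, not code from the linked README:

```js
// Illustrative only: check that the host's Ollama server is reachable from
// inside the AnythingLLM container. "host.docker.internal" resolves to the
// Docker host; plain "localhost" would loop back into the container.
const OLLAMA_BASE_PATH = "http://host.docker.internal:11434"; // assumed value

async function ollamaIsReachable() {
  try {
    // GET /api/tags lists the models Ollama has pulled locally.
    const res = await fetch(`${OLLAMA_BASE_PATH}/api/tags`);
    if (!res.ok) return false;
    const { models = [] } = await res.json();
    return models.length > 0;
  } catch {
    return false; // Ollama is down or unreachable from the container network.
  }
}
```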
Timothy Carambat
c59ab9da0a
Refactor LLM chat backend (#717)
* refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon

* no thread in sync chat since only the API uses it; adjust import locations
2024-02-14 12:32:07 -08:00
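
A hedged sketch of the shape this refactor describes: chat, streamed chat, and embed requests funnel into one execution path that delegates to whichever provider class is configured, with the stream-versus-sync decision made in a single place. Method names such as streamGetChatCompletion are assumptions for illustration, not necessarily the repo's exact API:

```js
// Hypothetical single execution path: the route handler no longer branches
// per provider; it asks the configured provider class what it supports.
async function executeChat(llmProvider, messages, { stream = true } = {}) {
  if (stream && typeof llmProvider.streamGetChatCompletion === "function") {
    // Streaming providers hand back an async iterable of text chunks.
    return { type: "stream", source: await llmProvider.streamGetChatCompletion(messages) };
  }
  // Everything else falls back to a one-shot (synchronous) completion.
  return { type: "text", textResponse: await llmProvider.getChatCompletion(messages) };
}
```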
Timothy Carambat
f490c35456
Recover from fatal Ollama crash from LangChain library (#693)
Resolve fatal crash from Ollama failure
2024-02-07 16:23:17 -08:00
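
The failure mode being guarded against: if the Ollama server drops mid-stream, the LangChain-backed stream throws, and an unhandled rejection can take the Node process down with it. A minimal recovery sketch, assuming LangChain JS's ChatOllama stream() async iterator; function names and the error text are illustrative:

```js
// Illustrative only: contain a failed Ollama stream so the chat request ends
// with an error message instead of crashing the server process.
async function streamOllamaSafely(chatOllama, messages, onToken) {
  try {
    const stream = await chatOllama.stream(messages);
    for await (const chunk of stream) {
      onToken(chunk?.content ?? ""); // forward each token chunk to the client
    }
  } catch (error) {
    // Ollama dropped the connection or never answered; surface it gracefully.
    onToken(`\n[Ollama failed to respond: ${error.message}]`);
  }
}
```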
Timothy Carambat
aca5940650
Refactor handleStream to LLM Classes (#685)
2024-02-07 08:15:14 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
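
On the "default temp to all providers" bullet: the idea is that each provider class carries its own default temperature, used only when no explicit value is set, because the same numeric temperature produces noticeably different results from provider to provider. A hedged sketch with assumed class names and default values:

```js
// Hypothetical provider classes, each owning a default temperature that is
// used only when the caller does not pass one explicitly.
class MistralLLM {
  constructor(settings = {}) {
    this.defaultTemp = 0.0; // assumed default for this sketch
    this.temperature = settings.temperature ?? this.defaultTemp;
  }
}

class OpenAiLLM {
  constructor(settings = {}) {
    this.defaultTemp = 0.7; // a different default; results vary per provider
    this.temperature = settings.temperature ?? this.defaultTemp;
  }
}
```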
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
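
The core of per-workspace model selection, sketched: the provider class is given the workspace's preferred model at instantiation ("set preferred model for chat on class instantiation"), falling back to the system-wide setting when the workspace has not chosen one. Field names like workspace.chatModel and the fallback model are assumptions for illustration:

```js
// Hypothetical: resolve which model a chat should use, preferring the
// workspace's own selection over the global default.
function resolveChatModel(workspace, systemSettings) {
  return workspace?.chatModel ?? systemSettings?.defaultChatModel ?? "gpt-3.5-turbo";
}

class OpenAiProvider {
  constructor(workspace, systemSettings) {
    // The preferred model is fixed when the class is instantiated.
    this.model = resolveChatModel(workspace, systemSettings);
  }
}
```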
Timothy Carambat
6d5968bf7e
Llm chore cleanup (#501)
* move internal functions to private in class
simplify lc message convertor

* Fix hanging Context text when none is present
2023-12-28 14:42:34 -08:00
Timothy Carambat
2a1202de54
Patch Ollama Streaming chunk issues (#500)
Replace stream/sync chats with Langchain interface for now
connect #499
ref: https://github.com/Mintplex-Labs/anything-llm/issues/495#issuecomment-1871476091
2023-12-28 13:59:47 -08:00
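
The workaround described here routes both streaming and synchronous Ollama chats through LangChain's chat-model interface so chunk handling lives in one place. A rough sketch of that idea using the current @langchain/community package (the repo at the time may have used a different import path; the base URL and model name are placeholders):

```js
import { ChatOllama } from "@langchain/community/chat_models/ollama";

// Illustrative: one LangChain-backed client serves both modes. Sync chat is a
// single invoke(); streaming accumulates chunks so the saved reply is complete.
const model = new ChatOllama({ baseUrl: "http://127.0.0.1:11434", model: "llama2" });

async function syncChat(messages) {
  const reply = await model.invoke(messages);
  return reply.content;
}

async function streamChat(messages, onToken) {
  let fullText = "";
  for await (const chunk of await model.stream(messages)) {
    fullText += chunk.content; // collect chunks for the persisted chat history
    onToken(chunk.content);
  }
  return fullText;
}
```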
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
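
For context on what adding an "LLM provider option" involves structurally: a provider is a class exposing a common chat-completion surface, so the rest of the app does not care which backend sits behind it. A skeletal, hypothetical Ollama provider against Ollama's REST API (POST /api/chat is a real endpoint; the class shape and method names are illustrative, not the repo's code):

```js
// Hypothetical skeleton of an Ollama provider class. The constructor options
// and method name mirror what a chat backend needs, not the repo's exact code.
class OllamaLLM {
  constructor({ basePath = "http://127.0.0.1:11434", model = "llama2" } = {}) {
    this.basePath = basePath;
    this.model = model;
  }

  // One-shot chat completion via Ollama's /api/chat endpoint (stream disabled).
  async getChatCompletion(messages) {
    const res = await fetch(`${this.basePath}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, messages, stream: false }),
    });
    if (!res.ok) throw new Error(`Ollama request failed with status ${res.status}`);
    const data = await res.json();
    return data?.message?.content ?? "";
  }
}
```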