anything-llm/server/utils/AiProviders

Latest commit: 2a1202de54 by Timothy Carambat (2023-12-28 13:59:47 -08:00)
Patch Ollama Streaming chunk issues (#500)
Replaces stream/sync chats with the Langchain interface for now.
Connects #499.
Ref: https://github.com/Mintplex-Labs/anything-llm/issues/495#issuecomment-1871476091
Directory     Last commit                                                                            Date
anthropic     assume default model where appropriate (#366)                                          2023-11-13 15:17:22 -08:00
azureOpenAi   Allow use of any embedder for any llm/update data handling modal (#386)                2023-11-16 15:19:49 -08:00
gemini        Add LLM support for Google Gemini-Pro (#492)                                           2023-12-27 17:08:03 -08:00
lmStudio      assume default model where appropriate (#366)                                          2023-11-13 15:17:22 -08:00
localAi       Add API key option to LocalAI (#407)                                                   2023-12-04 08:38:15 -08:00
native        [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413) 2023-12-07 14:48:27 -08:00
ollama        Patch Ollama Streaming chunk issues (#500)                                             2023-12-28 13:59:47 -08:00
openAi        Allow use of any embedder for any llm/update data handling modal (#386)                2023-11-16 15:19:49 -08:00