anything-llm/server/utils/AiProviders

Latest commit: df17fbda36 by Timothy Carambat, 2024-04-23 13:06:07 -07:00
Add generic OpenAI endpoint support (#1178)
* Add generic OpenAI endpoint support
* allow any input for model in case provider does not support models endpoint
Directory       Last commit message                                                   Date
anthropic       Handle Anthropic streamable errors (#1113)                            2024-04-16 16:25:32 -07:00
azureOpenAi     Stop generation button during stream-response (#892)                  2024-03-12 15:21:27 -07:00
gemini          Add support for Gemini-1.5 Pro (#1134)                                2024-04-19 08:59:46 -07:00
genericOpenAi   Add generic OpenAI endpoint support (#1178)                           2024-04-23 13:06:07 -07:00
groq            [FEAT] Add support for more groq models (Llama 3 and Gemma) (#1143)   2024-04-22 13:14:27 -07:00
huggingface     Stop generation button during stream-response (#892)                  2024-03-12 15:21:27 -07:00
lmStudio        Patch LMStudio Inference server bug integration (#957)                2024-03-22 14:39:30 -07:00
localAi         Refactor LLM chat backend (#717)                                      2024-02-14 12:32:07 -08:00
mistral         Refactor LLM chat backend (#717)                                      2024-02-14 12:32:07 -08:00
native          Stop generation button during stream-response (#892)                  2024-03-12 15:21:27 -07:00
ollama          useMLock for Ollama API chats (#1014)                                 2024-04-02 10:43:04 -07:00
openAi          Enable dynamic GPT model dropdown (#1111)                             2024-04-16 14:54:39 -07:00
openRouter      1173 dynamic cache openrouter (#1176)                                 2024-04-23 11:10:54 -07:00
perplexity      Bump all static model providers (#1101)                               2024-04-14 12:55:21 -07:00
togetherAi      bump togetherai models Apr 18, 2024                                   2024-04-18 16:28:43 -07:00