| Directory | Last commit | Date |
| --- | --- | --- |
| anthropic | Handle Anthropic streamable errors (#1113) | 2024-04-16 16:25:32 -07:00 |
| azureOpenAi | Stop generation button during stream-response (#892) | 2024-03-12 15:21:27 -07:00 |
| gemini | Add support for Gemini-1.5 Pro (#1134) | 2024-04-19 08:59:46 -07:00 |
| genericOpenAi | Add generic OpenAI endpoint support (#1178) | 2024-04-23 13:06:07 -07:00 |
| groq | [FEAT] Add support for more groq models (Llama 3 and Gemma) (#1143) | 2024-04-22 13:14:27 -07:00 |
| huggingface | Stop generation button during stream-response (#892) | 2024-03-12 15:21:27 -07:00 |
| lmStudio | Patch LMStudio Inference server bug integration (#957) | 2024-03-22 14:39:30 -07:00 |
| localAi | Refactor LLM chat backend (#717) | 2024-02-14 12:32:07 -08:00 |
| mistral | Refactor LLM chat backend (#717) | 2024-02-14 12:32:07 -08:00 |
| native | Stop generation button during stream-response (#892) | 2024-03-12 15:21:27 -07:00 |
| ollama | useMLock for Ollama API chats (#1014) | 2024-04-02 10:43:04 -07:00 |
| openAi | Enable dynamic GPT model dropdown (#1111) | 2024-04-16 14:54:39 -07:00 |
| openRouter | Strengthen field validations on user Updates (#1201) | 2024-04-26 16:46:04 -07:00 |
| perplexity | update perplexity models | 2024-04-25 07:34:28 -07:00 |
| togetherAi | bump togetherai models Apr 18, 2024 | 2024-04-18 16:28:43 -07:00 |