Sean Hatfield
524edd6e69
[FEAT] Add support for Claude Sonnet 3.5 model ( #1731 )
add support for claude sonnet 3.5 model
2024-06-20 10:13:53 -07:00
Sean Hatfield
3f78ef413b
[FEAT] Support for gemini-1.0-pro model and fixes to prompt window limit ( #1557 )
support for gemini-1.0-pro model and fixes to prompt window limit
2024-05-29 08:17:35 +08:00
Timothy Carambat
2f9b785f42
Patch handling of end chunk stream events for OpenAI endpoints ( #1487 )
* Patch handling of end chunk stream events for OpenAI endpoints
* update LiteLLM to use generic handler
* update for empty choices
2024-05-23 10:20:40 -07:00
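For context on the empty-choices handling above, here is a minimal sketch (hypothetical names, not the project's actual handler) of an OpenAI-style stream consumer that skips chunks whose choices array is empty, which some OpenAI-compatible endpoints emit as their final or usage event:

```js
// Hypothetical consumer of an async-iterable OpenAI-style chat completion stream.
async function streamToText(stream, onToken = () => {}) {
  let fullText = "";
  for await (const chunk of stream) {
    // Some endpoints send end-of-stream or usage chunks with no choices at all;
    // skip them instead of reading choices[0] and crashing.
    if (!Array.isArray(chunk?.choices) || chunk.choices.length === 0) continue;

    const choice = chunk.choices[0];
    const token = choice?.delta?.content ?? "";
    if (token) {
      fullText += token;
      onToken(token);
    }
    if (choice.finish_reason) break; // explicit end-chunk signal
  }
  return fullText;
}
```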
Sean Hatfield
cc7e7fb3ac
[FEAT] Add support for gemini-1.5-flash-latest model ( #1502 )
* add support for gemini-1.5-flash-latest
* update comment in gemini LLM provider
2024-05-23 09:42:30 -07:00
timothycarambat
9f327d015a
update error handling for OpenAI providers
2024-05-22 09:58:10 -05:00
Timothy Carambat
28eba636e9
Allow setting of safety thresholds for Gemini ( #1466 )
* Allow setting of safety thresholds for Gemini
* linting
2024-05-20 13:17:00 -05:00
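A minimal sketch of what configurable Gemini safety thresholds can look like with the @google/generative-ai SDK; the env variable name and the single-threshold-for-all-categories mapping are assumptions, not the project's actual settings:

```js
const {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} = require("@google/generative-ai");

// Assumed env var; apply one user-chosen threshold to every harm category.
const threshold =
  HarmBlockThreshold[process.env.GEMINI_SAFETY_SETTING] ??
  HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE;

const safetySettings = [
  HarmCategory.HARM_CATEGORY_HARASSMENT,
  HarmCategory.HARM_CATEGORY_HATE_SPEECH,
  HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
  HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
].map((category) => ({ category, threshold }));

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-pro", safetySettings });
```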
Timothy Carambat
9ace0e67e6
Validate max_tokens is number ( #1445 )
2024-05-17 21:44:55 -07:00
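A tiny hypothetical version of that kind of guard, coercing a user-supplied max_tokens to a positive integer and falling back to a default when it is missing or not numeric:

```js
// Hypothetical helper; the fallback value is arbitrary.
function toValidMaxTokens(value, fallback = 1024) {
  const parsed = Number(value);
  if (!Number.isFinite(parsed) || parsed <= 0) return fallback;
  return Math.floor(parsed);
}

// toValidMaxTokens("4096") -> 4096, toValidMaxTokens("banana") -> 1024
```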
Timothy Carambat
01cf2fed17
Make native embedder the fallback for all LLMs ( #1427 )
2024-05-16 17:25:05 -07:00
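An illustration of the fallback idea with hypothetical names: prefer an explicitly configured embedding engine, otherwise use the built-in native embedder so every LLM provider still has a working embedder:

```js
// Hypothetical selector; "native" stands in for the built-in embedder.
function selectEmbeddingEngine(env = process.env) {
  const configured = (env.EMBEDDING_ENGINE || "").trim();
  return configured !== "" ? configured : "native";
}
```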
Sean Hatfield
826ef00da3
[FEAT] LiteLLM provider support ( #1424 )
* litellm LLM provider support
* fix lint error
* change import orders
fix issue with model retrieval
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-16 13:56:28 -07:00
Timothy Carambat
64b62290d7
Set gpt-4o as default for OpenAI ( #1391 )
2024-05-13 14:31:49 -07:00
Sean Hatfield
9ed2309757
[FEAT] Add API key support for Oobabooga Web UI ( #1354 )
* add api key support for oobabooga web ui
* don't expose API Key for TextWebGenUi
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-13 12:58:16 -07:00
Sean Hatfield
948ac8a3dd
[FIX] Validate messages schema for gemini provider ( #1351 )
validate messages schema for gemini provider
2024-05-10 17:33:25 -07:00
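A rough sketch of message-schema validation with hypothetical names; the assumption here is that the history must consist of well-formed user/model turns that alternate:

```js
// Hypothetical validator: keep only well-formed { role, content } messages and
// merge consecutive same-role messages so the history alternates.
function validateGeminiMessages(messages = []) {
  const cleaned = messages.filter(
    (m) =>
      m &&
      ["user", "model"].includes(m.role) &&
      typeof m.content === "string" &&
      m.content.trim() !== ""
  );

  const alternating = [];
  for (const message of cleaned) {
    const last = alternating[alternating.length - 1];
    if (last && last.role === message.role) {
      last.content += `\n${message.content}`; // collapse same-role runs
    } else {
      alternating.push({ ...message });
    }
  }
  return alternating;
}
```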
Sean Hatfield
0a6a9e40c1
[FIX] Add max tokens field to generic OpenAI LLM connector ( #1345 )
* add max tokens field to generic openai llm connector
* add max_tokens property to generic openai agent provider
2024-05-10 14:49:02 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support ( #1279 )
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name
patch workspace<>model support for provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support ( #1268 )
* koboldcpp LLM support
* update .env.examples for koboldcpp support
* update LLM preference order
update koboldcpp comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support ( #1233 )
* getChatCompletion working WIP streaming
* WIP
* working streaming WIP abort stream
* implement cohere embedder support
* remove inputType option from cohere embedder
* fix cohere LLM from not aborting stream when canceled by user
* Patch Cohere implementation
* add cohere to onboarding
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Sean Hatfield
9feaad79cc
[CHORE] Remove sendChat and streamChat in all LLM providers ( #1260 )
* remove sendChat and streamChat functions/references in all LLM providers
* remove unused imports
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-01 16:52:28 -07:00
Timothy Carambat
547d4859ef
Bump openai package to latest ( #1234 )
* Bump `openai` package to latest
Tested all except localai
* bump LocalAI support with latest image
* add deprecation notice
* linting
2024-04-30 12:33:42 -07:00
Timothy Carambat
94017e2b51
bump langchain deps ( #1231 )
* bump langchain deps
* patch native and ollama providers remove deprecated deps
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-04-30 12:04:24 -07:00
Timothy Carambat
1b35bcbeab
Strengthen field validations on user Updates ( #1201 )
* Strengthen field validations on user Updates
* update writables
2024-04-26 16:46:04 -07:00
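A rough illustration of field-level validation on update payloads; the allow-list and helper name are made up for the example:

```js
// Hypothetical allow-list of writable user fields.
const WRITABLE_FIELDS = ["username", "password", "role"];

function sanitizeUserUpdate(updates = {}) {
  return Object.fromEntries(
    Object.entries(updates).filter(
      ([key, value]) => WRITABLE_FIELDS.includes(key) && value !== undefined
    )
  );
}

// sanitizeUserUpdate({ username: "sam", isAdmin: true }) -> { username: "sam" }
```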
timothycarambat
df2c01b176
patch OpenRouter model fetcher when key is not present
2024-04-26 15:58:30 -07:00
timothycarambat
dfaaf1680f
update perplexity models
resolves #1188
2024-04-25 07:34:28 -07:00
Timothy Carambat
df17fbda36
Add generic OpenAI endpoint support ( #1178 )
* Add generic OpenAI endpoint support
* allow any input for model in case provider does not support models endpoint
2024-04-23 13:06:07 -07:00
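A sketch of a generic OpenAI-compatible connector using the official openai package pointed at an arbitrary base URL; the env variable names here are illustrative, not necessarily the project's settings keys:

```js
const OpenAI = require("openai");

// Any endpoint that speaks the /v1/chat/completions protocol will work.
const client = new OpenAI({
  baseURL: process.env.GENERIC_OPENAI_BASE_PATH, // e.g. "http://localhost:8000/v1"
  apiKey: process.env.GENERIC_OPENAI_API_KEY || "not-needed",
});

// Model is passed through untouched since the provider may not expose a
// models endpoint to validate against.
async function chat(model, messages) {
  const completion = await client.chat.completions.create({ model, messages });
  return completion.choices?.[0]?.message?.content ?? null;
}
```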
Timothy Carambat
ac6ca13f60
Dynamic model cache for OpenRouter (issue #1173) ( #1176 )
* patch agent invocation rule
* Add dynamic model cache from OpenRouter API for context length and available models
2024-04-23 11:10:54 -07:00
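A rough sketch of such a dynamic model cache: fetch OpenRouter's public model list, keep it in memory, and look up context windows by model id. Field names follow OpenRouter's /api/v1/models response; the cache TTL and helper names are illustrative:

```js
let cache = { models: new Map(), fetchedAt: 0 };
const CACHE_TTL_MS = 1000 * 60 * 60; // refresh hourly

async function getOpenRouterModels() {
  const stale = Date.now() - cache.fetchedAt > CACHE_TTL_MS;
  if (cache.models.size > 0 && !stale) return cache.models;

  const res = await fetch("https://openrouter.ai/api/v1/models");
  const { data = [] } = await res.json();
  cache = {
    models: new Map(data.map((m) => [m.id, m])),
    fetchedAt: Date.now(),
  };
  return cache.models;
}

async function contextWindowFor(modelId, fallback = 4096) {
  const models = await getOpenRouterModels();
  return models.get(modelId)?.context_length ?? fallback;
}
```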
Sean Hatfield
897e168fd1
[FEAT] Add support for more groq models (Llama 3 and Gemma) ( #1143 )
add support for more groq models
2024-04-22 13:14:27 -07:00
Timothy Carambat
58b744771f
Add support for Gemini-1.5 Pro ( #1134 )
* Add support for Gemini-1.5 Pro
bump @google/generative-ai pkg
Toggle apiVersion if beta model selected
resolves #1109
* update API messages due to package change
2024-04-19 08:59:46 -07:00
timothycarambat
e28c0469f4
bump togetherai models Apr 18, 2024
resolves #1126
2024-04-18 16:28:43 -07:00
Timothy Carambat
f9ac27e9a4
Handle Anthropic streamable errors ( #1113 )
2024-04-16 16:25:32 -07:00
Timothy Carambat
661563408a
Enable dynamic GPT model dropdown ( #1111 )
* Enable dynamic GPT model dropdown
2024-04-16 14:54:39 -07:00
Timothy Carambat
8306098b08
Bump all static model providers ( #1101 )
2024-04-14 12:55:21 -07:00
Timothy Carambat
6fde5570b3
remove unneeded answerKey for Anthropic ( #1100 )
resolves #1096
2024-04-14 12:04:38 -07:00
Timothy Carambat
df2aac9f3c
useMLock for Ollama API chats ( #1014 )
2024-04-02 10:43:04 -07:00
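For reference, a bare-bones Ollama /api/chat call with use_mlock enabled so model weights stay pinned in RAM between requests; the base URL and model name are placeholders:

```js
async function ollamaChat(messages) {
  const res = await fetch("http://127.0.0.1:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages,
      stream: false,
      options: { use_mlock: true }, // keep weights locked in memory
    }),
  });
  const data = await res.json();
  return data?.message?.content ?? null;
}
```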
Timothy Carambat
0dd6001fa6
Patch Gemini/Google AI errors ( #977 )
2024-03-26 17:20:12 -07:00
Timothy Carambat
1135853740
Patch LMStudio Inference server bug integration ( #957 )
2024-03-22 14:39:30 -07:00
Sean Hatfield
ac0e62d490
[FEAT] Anthropic Haiku model support ( #901 )
add Haiku model support
2024-03-13 17:32:02 -07:00
Timothy Carambat
0e46a11cb6
Stop generation button during stream-response ( #892 )
* Stop generation button during stream-response
* add custom stop icon
* add stop to thread chats
2024-03-12 15:21:27 -07:00
Sean Hatfield
e0d5d8039a
[FEAT] Claude 3 support and implement new version of Anthropic SDK ( #863 )
* implement new version of anthropic sdk and support new models
* remove handleAnthropicStream and move to handleStream inside anthropic provider
* update useGetProvidersModels for new anthropic models
2024-03-06 14:57:47 -08:00
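For context, a bare-bones streaming call against the @anthropic-ai/sdk Messages API (a sketch, not the provider's actual handleStream implementation; the model id is just an example):

```js
const Anthropic = require("@anthropic-ai/sdk");

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function streamClaude(messages, onToken = () => {}) {
  const stream = await anthropic.messages.create({
    model: "claude-3-opus-20240229",
    max_tokens: 1024,
    messages,
    stream: true,
  });

  let text = "";
  for await (const event of stream) {
    // Text arrives as content_block_delta events carrying text_delta payloads.
    if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
      text += event.delta.text;
      onToken(event.delta.text);
    }
  }
  return text;
}
```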
Sean Hatfield
0634013788
[FEAT] Groq LLM support ( #865 )
* Groq LLM support complete
* update useGetProvidersModels for groq models
* Add definitions
update comments and error log reports
add example envs
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Timothy Carambat
a385ea3d82
CHORE: bump pplx model support ( #791 )
bump pplx model support
2024-02-23 17:33:16 -08:00
Sean Hatfield
633f425206
[FEAT] OpenRouter integration ( #784 )
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter
update tagline
update comment
* update timeout comment
* wait for first chunk to start timer
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support ( #778 )
* add LLM support for perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
Sean Hatfield
e99c74aec1
[DOCS] Update Docker documentation to show how to set up Ollama with the Dockerized version of AnythingLLM ( #774 )
* update HOW_TO_USE_DOCKER to help with Ollama setup using docker
* update HOW_TO_USE_DOCKER
* styles update
* create separate README for ollama and link to it in HOW_TO_USE_DOCKER
* styling update
2024-02-21 18:42:32 -08:00
Timothy Carambat
791c0ee9dc
Enable ability to do full-text query on documents ( #758 )
* Enable ability to do full-text query on documents
Show alert modal on first pin for client
Add ability to use pins in stream/chat/embed
* typo and copy update
* simplify spread of context and sources
2024-02-21 13:15:45 -08:00
Timothy Carambat
c59ab9da0a
Refactor LLM chat backend ( #717 )
* refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
* no thread in sync chat since only api uses it
adjust import locations
2024-02-14 12:32:07 -08:00
Timothy Carambat
f490c35456
Recover from fatal Ollama crash from LangChain library ( #693 )
Resolve fatal crash from Ollama failure
2024-02-07 16:23:17 -08:00
Timothy Carambat
aca5940650
Refactor handleStream to LLM Classes ( #685 )
2024-02-07 08:15:14 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints ( #680 )
2024-02-06 09:17:51 -08:00
Sean Hatfield
21653b09fc
[FEAT] add gpt-4-turbo-preview ( #651 )
* add gpt-4-turbo-preview
* add gpt-4-turbo-preview to valid models
2024-01-26 13:03:50 -08:00
Sean Hatfield
62cea07599
add gpt-3.5-turbo-1106 model for openai LLM ( #636 )
* add gpt-3.5-turbo-1106 model for openai LLM
* add gpt-3.5-turbo-1106 as valid model for backend and per workspace model selection
2024-01-22 13:19:47 -08:00
Sean Hatfield
3fe7a25759
add token context limit for native llm settings ( #614 )
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 16:25:30 -08:00