Timothy Carambat
7342839e77
Passthrough agentModel for LMStudio ( #2499 )
2024-10-18 11:44:48 -07:00
Timothy Carambat
93d7ce6d34
Handle Bedrock models that cannot use system prompts ( #2489 )
2024-10-16 12:31:04 -07:00
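For the Bedrock change above, a minimal sketch of one way a system prompt can be folded into the first user turn for models that reject the system role; the model list and helper name are illustrative, not the project's actual implementation.

    type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

    // Illustrative set of Bedrock model ids assumed to reject a system message.
    const NO_SYSTEM_PROMPT_MODELS = new Set(["amazon.titan-text-express-v1"]);

    // Prepend a leading system prompt onto the first user message when the
    // target model cannot accept the system role; otherwise pass through.
    function adaptForBedrock(modelId: string, messages: ChatMessage[]): ChatMessage[] {
      const [first, ...rest] = messages;
      if (!NO_SYSTEM_PROMPT_MODELS.has(modelId) || first?.role !== "system") return messages;
      const idx = rest.findIndex((m) => m.role === "user");
      if (idx === -1) return [{ role: "user", content: first.content }, ...rest];
      return rest.map((m, i) =>
        i === idx ? { ...m, content: `${first.content}\n\n${m.content}` } : m
      );
    }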
Sean Hatfield
fa528e0cf3
OpenAI o1 model support ( #2427 )
...
* support openai o1 models
* Prevent O1 use for agents
getter for isO1Model;
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-10-15 19:42:13 -07:00
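A hedged sketch of what an isO1Model getter and its gating might look like for the o1 support above; the class and field names are hypothetical, not the repo's own.

    // Hypothetical provider wrapper showing how an o1 check can gate behavior.
    class OpenAiProvider {
      constructor(private model: string = "o1-preview") {}

      // o1-family models (o1-preview, o1-mini) need special handling and are
      // excluded from agent use.
      get isO1Model(): boolean {
        return this.model.startsWith("o1");
      }

      buildMessages(messages: { role: string; content: string }[]) {
        if (!this.isO1Model) return messages;
        // At launch, o1 models rejected the system role, so demote it to user.
        return messages.map((m) => (m.role === "system" ? { ...m, role: "user" } : m));
      }
    }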
Sean Hatfield
6674e5aab8
Support free-form input for workspace model for providers with no /models endpoint ( #2397 )
...
* support generic openai workspace model
* Update UI for free form input for some providers
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-10-15 15:24:44 -07:00
Timothy Carambat
bce7988683
Integrate Apipie support directly ( #2470 )
...
resolves #2464
resolves #989
Note: Streaming not supported
2024-10-15 12:36:06 -07:00
a4v2d4
cadc09d71a
[FEAT] Add Llama 3.2 models to Fireworks AI's LLM selection dropdown ( #2384 )
...
Add Llama 3.2 3B and 1B models to Fireworks AI LLM selection
2024-09-28 15:30:56 -07:00
Sean Hatfield
7390bae6f6
Support DeepSeek ( #2377 )
...
* add deepseek support
* lint
* update deepseek context length
* add deepseek to onboarding
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-09-26 12:55:12 -07:00
Timothy Carambat
a781345a0d
Enable Mistral Multimodal ( #2343 )
...
* Enable Mistral Multimodal
* remove console
2024-09-21 16:17:17 -05:00
Timothy Carambat
a30fa9b2ed
1943 add fireworksai support ( #2300 )
...
* Issue #1943: Add support for LLM provider - Fireworks AI
* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI
* class only return
---------
Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
2024-09-16 12:10:44 -07:00
Timothy Carambat
906eb70ca1
bump Perplexity models ( #2275 )
2024-09-12 13:13:47 -07:00
Timothy Carambat
c612239ecb
Add Gemini exp models ( #2268 )
...
Add Gemini models
resolves #2263
2024-09-11 13:03:14 -07:00
Timothy Carambat
b4651aff35
Support gpt-4o for Azure deployments ( #2182 )
2024-08-26 14:35:42 -07:00
timothycarambat
cb7cb2d976
Add 405B to perplexity
2024-08-19 12:26:22 -07:00
Timothy Carambat
99f2c25b1c
Agent context window + context window refactor ( #2126 )
...
* Enable agent context windows to be accurate per provider:model
* Refactor model mapping to external file
Add token count to document length instead of char-count
reference promptWindowLimit from AIProvider in central location
* remove unused imports
2024-08-15 12:13:28 -07:00
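A sketch of the refactor's idea: per-model context windows kept in one external map that both chat and agents resolve through a single lookup. The map contents and function name here are illustrative only.

    // Illustrative provider -> model -> context window map kept in one file.
    const MODEL_MAP: Record<string, Record<string, number>> = {
      openai: { "gpt-4o": 128_000, "gpt-3.5-turbo": 16_385 },
      anthropic: { "claude-3-5-sonnet-20240620": 200_000 },
    };

    // Single lookup used everywhere instead of hard-coded per-provider numbers.
    function promptWindowLimit(provider: string, model: string): number {
      return MODEL_MAP[provider]?.[model] ?? 4_096; // conservative default
    }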
Shahar
4365d69359
Fix TypeError by replacing this.openai.createChatCompletion with the correct function call ( #2117 )
...
fixed new api syntax
2024-08-14 14:39:48 -07:00
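For context on the TypeError fix above: the openai npm package moved from openai.createChatCompletion() in v3 to client.chat.completions.create() in v4, which is the likely shape of the change. A small sketch:

    import OpenAI from "openai";

    const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    // v4 replaces this.openai.createChatCompletion(...) with the call below.
    async function chat(prompt: string): Promise<string> {
      const res = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      });
      return res.choices[0]?.message?.content ?? "";
    }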
PyKen
a2571024a9
Add prompt window limits for gpt-4o-* models ( #2104 )
2024-08-13 09:13:36 -07:00
Timothy Carambat
f06ef6180d
add exp model to v1Beta ( #2082 )
2024-08-09 14:19:49 -07:00
Sean Hatfield
7273c892a1
Ollama performance mode option ( #2014 )
...
* ollama performance mode option
* Change ENV prop
Move perf setting to advanced
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-08-02 13:29:17 -07:00
Timothy Carambat
ba8e4e5d3e
handle OpenRouter exceptions on streaming ( #2033 )
2024-08-02 12:23:39 -07:00
RahSwe
c55ef33fce
Gemini Pro 1.5, API support for 2M context and new experimental model ( #2031 )
2024-08-02 10:24:31 -07:00
timothycarambat
6dc3642661
Cap Groq preview models at 8K tokens due to warning
2024-08-01 09:24:57 -07:00
timothycarambat
466bf7dc9c
Bump Perplexity and Together AI static model list
2024-07-31 10:58:34 -07:00
Timothy Carambat
38fc181238
Add multimodality support ( #2001 )
...
* Add multimodality support
* Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal support
* temp dev build
* patch bad import
* noscrolls for windows dnd
* noscrolls for windows dnd
* update README
* update README
* add multimodal check
2024-07-31 10:47:49 -07:00
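A minimal sketch of the OpenAI-style content-part format that most multimodal providers accept when attaching an image to a chat turn; the helper name and base64 handling are illustrative, not the project's code.

    // Build a user message carrying both text and an inline base64 image.
    function userMessageWithImage(text: string, base64Png: string) {
      return {
        role: "user" as const,
        content: [
          { type: "text", text },
          { type: "image_url", image_url: { url: `data:image/png;base64,${base64Png}` } },
        ],
      };
    }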
Timothy Carambat
5e73dce506
Enable editing of OpenRouter stream timeout for slower connections ( #1994 )
2024-07-29 11:49:14 -07:00
timothycarambat
296f041564
patch perplexity model ids
...
closes #1990
2024-07-28 16:29:18 -07:00
timothycarambat
7a2ffefdc3
update case statement for duplicate groq model
2024-07-25 17:39:29 -07:00
Timothy Carambat
61e214aa8c
Add support for Groq /models endpoint ( #1957 )
...
* Add support for Groq /models endpoint
* linting
2024-07-24 08:35:52 -07:00
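A sketch of pulling the model list from Groq's OpenAI-compatible /models endpoint so the dropdown no longer needs to be hard-coded; error handling is kept minimal and is not the repo's actual code.

    async function groqModels(apiKey: string): Promise<string[]> {
      // Groq exposes an OpenAI-compatible API, including GET /models.
      const res = await fetch("https://api.groq.com/openai/v1/models", {
        headers: { Authorization: `Bearer ${apiKey}` },
      });
      if (!res.ok) return [];
      const { data } = await res.json();
      return data.map((m: { id: string }) => m.id);
    }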
Timothy Carambat
9366e69d88
Add AWS bedrock support for LLM + agents ( #1935 )
...
add AWS bedrock support for LLM + agents
2024-07-23 16:35:37 -07:00
Timothy Carambat
76aa2a4fd4
Implement support for selecting basic keep_alive times for Ollama ( #1920 )
2024-07-22 14:44:47 -07:00
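A sketch of threading a user-selected keep_alive value through to Ollama's /api/chat; Ollama accepts durations such as "5m", 0 (unload immediately), or -1 (keep loaded). The model name and default here are placeholders.

    async function ollamaChat(prompt: string, keepAlive: string | number = "5m") {
      const res = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3",
          messages: [{ role: "user", content: prompt }],
          keep_alive: keepAlive, // how long Ollama keeps the model in memory
          stream: false,
        }),
      });
      const json = await res.json();
      return json.message?.content ?? "";
    }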
Timothy Carambat
3198718975
Update references to new domain ( #1916 )
2024-07-22 11:05:34 -07:00
Timothy Carambat
5df6b5f7d9
Bump perplexity models ( #1905 )
...
* Added Supported Models Free Tier - chat_models.txt
Need to fill in correct Parameter Count.
* Bump perplexity model
closes #1901
closes #1900
---------
Co-authored-by: Tim-Hoekstra <135951177+Tim-Hoekstra@users.noreply.github.com>
2024-07-19 15:11:10 -07:00
Timothy Carambat
0b845fbb1c
Deprecate .isSafe moderation ( #1790 )
...
Add type defs to helpers
2024-06-28 15:32:30 -07:00
Sean Hatfield
524edd6e69
[FEAT] Add support for Claude Sonnet 3.5 model ( #1731 )
...
add support for claude sonnet 3.5 model
2024-06-20 10:13:53 -07:00
Sean Hatfield
3f78ef413b
[FEAT] Support for gemini-1.0-pro model and fixes to prompt window limit ( #1557 )
...
support for gemini-1.0-pro model and fixes to prompt window limit
2024-05-29 08:17:35 +08:00
Timothy Carambat
2f9b785f42
Patch handling of end chunk stream events for OpenAI endpoints ( #1487 )
...
* Patch handling of end chunk stream events for OpenAI endpoints
* update LiteLLM to use generic handler
* update for empty choices
2024-05-23 10:20:40 -07:00
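A sketch of a stream loop that tolerates end-of-stream chunks whose choices array is empty, which some OpenAI-compatible endpoints emit as their final event; types and naming are illustrative.

    type StreamChunk = { choices: { delta?: { content?: string } }[] };

    async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
      let text = "";
      for await (const chunk of stream) {
        // Guard against chunks with no choices instead of indexing blindly.
        const delta = chunk.choices?.[0]?.delta?.content;
        if (delta) text += delta;
      }
      return text;
    }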
Sean Hatfield
cc7e7fb3ac
[FEAT] Add support for gemini-1.5-flash-latest model ( #1502 )
...
* add support for gemini-1.5-flash-latest
* update comment in gemini LLM provider
2024-05-23 09:42:30 -07:00
timothycarambat
9f327d015a
update error handling for OpenAI providers
2024-05-22 09:58:10 -05:00
Timothy Carambat
28eba636e9
Allow setting of safety thresholds for Gemini ( #1466 )
...
* Allow setting of safety thresholds for Gemini
* linting
2024-05-20 13:17:00 -05:00
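A sketch of mapping one env-driven threshold onto Gemini's safetySettings; the env variable name is hypothetical, while the category and threshold strings follow Google's Generative AI API.

    const threshold = process.env.GEMINI_SAFETY_SETTING ?? "BLOCK_MEDIUM_AND_ABOVE";

    // One threshold applied across all harm categories.
    const safetySettings = [
      "HARM_CATEGORY_HARASSMENT",
      "HARM_CATEGORY_HATE_SPEECH",
      "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "HARM_CATEGORY_DANGEROUS_CONTENT",
    ].map((category) => ({ category, threshold }));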
Timothy Carambat
9ace0e67e6
Validate max_tokens is number ( #1445 )
2024-05-17 21:44:55 -07:00
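A sketch of the kind of coercion such validation implies, assuming a hypothetical helper and fallback value:

    // Accept strings or numbers from the UI, but always send a positive integer.
    function sanitizeMaxTokens(value: unknown, fallback = 1024): number {
      const n = Number(value);
      return Number.isFinite(n) && n > 0 ? Math.floor(n) : fallback;
    }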
Timothy Carambat
01cf2fed17
Make native embedder the fallback for all LLMs ( #1427 )
2024-05-16 17:25:05 -07:00
Sean Hatfield
826ef00da3
[FEAT] LiteLLM provider support ( #1424 )
...
* litellm LLM provider support
* fix lint error
* change import orders
fix issue with model retrieval
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-16 13:56:28 -07:00
Timothy Carambat
64b62290d7
Set gpt-4o as default for OpenAI ( #1391 )
2024-05-13 14:31:49 -07:00
Sean Hatfield
9ed2309757
[FEAT] Add API key support for Oobabooga Web UI ( #1354 )
...
* add api key support for oobabooga web ui
* don't expose API key for TextWebGenUi
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-13 12:58:16 -07:00
Sean Hatfield
948ac8a3dd
[FIX] Validate messages schema for gemini provider ( #1351 )
...
validate messages schema for gemini provider
2024-05-10 17:33:25 -07:00
Sean Hatfield
0a6a9e40c1
[FIX] Add max tokens field to generic OpenAI LLM connector ( #1345 )
...
* add max tokens field to generic openai llm connector
* add max_tokens property to generic openai agent provider
2024-05-10 14:49:02 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support ( #1279 )
...
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name
patch workspace<>model support for provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support ( #1268 )
...
* koboldcpp LLM support
* update .env.examples for koboldcpp support
* update LLM preference order
update koboldcpp comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support ( #1233 )
...
* getChatCompletion working WIP streaming
* WIP
* working streaming WIP abort stream
* implement cohere embedder support
* remove inputType option from cohere embedder
* fix cohere LLM from not aborting stream when canceled by user
* Patch Cohere implementation
* add cohere to onboarding
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Sean Hatfield
9feaad79cc
[CHORE] Remove sendChat and streamChat in all LLM providers ( #1260 )
...
* remove sendChat and streamChat functions/references in all LLM providers
* remove unused imports
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-01 16:52:28 -07:00
Timothy Carambat
547d4859ef
Bump openai package to latest ( #1234 )
...
* Bump `openai` package to latest
Tested all except localai
* bump LocalAI support with latest image
* add deprecation notice
* linting
2024-04-30 12:33:42 -07:00