Commit Graph

19 Commits

Author SHA1 Message Date
Sean Hatfield
6674e5aab8
Support free-form input for workspace model for providers with no /models endpoint (#2397)
* support generic openai workspace model

* Update UI for free form input for some providers

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-10-15 15:24:44 -07:00
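
For providers with no /models endpoint, a workspace model cannot be picked from a fetched list, so the setting falls back to a plain text field and whatever the user types is passed through as the model identifier. A minimal sketch of that idea (provider names and the capability table below are illustrative, not the project's actual API):

```ts
// Sketch: decide whether the workspace-model setting can be a dropdown
// (provider lists models via an API) or must be free-form text input.
type ProviderName = "openai" | "generic-openai" | "ollama";

// Hypothetical capability table; real providers and capabilities may differ.
const PROVIDERS_WITH_MODELS_ENDPOINT = new Set<ProviderName>(["openai", "ollama"]);

function workspaceModelInputType(provider: ProviderName): "select" | "text" {
  // No /models endpoint -> let the user type any model identifier.
  return PROVIDERS_WITH_MODELS_ENDPOINT.has(provider) ? "select" : "text";
}

console.log(workspaceModelInputType("generic-openai")); // "text"
```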
Timothy Carambat
b4651aff35
Support gpt-4o for Azure deployments (#2182) 2024-08-26 14:35:42 -07:00
Timothy Carambat
99f2c25b1c
Agent Context window + context window refactor. (#2126)
* Enable agent context windows to be accurate per provider:model

* Refactor model mapping to external file
Add token count to document length instead of char-count
reference promptWindowLimit from AIProvider in central location

* remove unused imports
2024-08-15 12:13:28 -07:00
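
The refactor above centralizes per provider:model context windows in one lookup and measures document length in tokens rather than characters. A hedged sketch of that shape (the map entries and the character-based token heuristic are stand-ins, not the repo's exact values):

```ts
// Sketch of a central model-map lookup for context windows, with a token-based
// length estimate replacing a raw character count. Window sizes are illustrative.
const MODEL_MAP: Record<string, Record<string, number>> = {
  openai: { "gpt-4o": 128_000, "gpt-3.5-turbo": 16_385 },
  anthropic: { "claude-3-opus-20240229": 200_000 },
};

const DEFAULT_WINDOW = 4_096;

function promptWindowLimit(provider: string, model: string): number {
  return MODEL_MAP[provider]?.[model] ?? DEFAULT_WINDOW;
}

// Stand-in tokenizer: a real implementation would use a proper encoder;
// ~4 characters per token is only a rough heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const doc = "some long document text ...";
console.log(estimateTokens(doc) <= promptWindowLimit("openai", "gpt-4o")); // true
```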
Timothy Carambat
0b845fbb1c
Deprecate .isSafe moderation (#1790)
Add type defs to helpers
2024-06-28 15:32:30 -07:00
Timothy Carambat
01cf2fed17
Make native embedder the fallback for all LLMs (#1427) 2024-05-16 17:25:05 -07:00
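
The change above makes a locally bundled ("native") embedder the default whenever no embedding engine is explicitly configured, so every LLM choice still has a working embedder. Roughly (the class names below are placeholders, not the project's exports):

```ts
// Placeholder embedder interfaces to illustrate the fallback; not real exports.
interface Embedder {
  embedTextInput(text: string): Promise<number[]>;
}

class NativeEmbedder implements Embedder {
  async embedTextInput(text: string): Promise<number[]> {
    return [text.length]; // stand-in vector for illustration
  }
}

class OpenAiEmbedder implements Embedder {
  async embedTextInput(_text: string): Promise<number[]> {
    throw new Error("stub: would call the OpenAI embeddings API");
  }
}

// If no embedding engine is set, fall back to the native embedder instead of
// assuming the LLM provider also supplies embeddings.
function getEmbedder(engine?: string): Embedder {
  switch (engine) {
    case "openai":
      return new OpenAiEmbedder();
    default:
      return new NativeEmbedder();
  }
}
```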
Sean Hatfield
9feaad79cc
[CHORE] Remove sendChat and streamChat in all LLM providers (#1260)
* remove sendChat and streamChat functions/references in all LLM providers

* remove unused imports

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-01 16:52:28 -07:00
Timothy Carambat
0e46a11cb6
Stop generation button during stream-response (#892)
* Stop generation button during stream-response

* add custom stop icon

* add stop to thread chats
2024-03-12 15:21:27 -07:00
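
Stopping generation mid-response generally means aborting the in-flight fetch and closing its reader when the user clicks the stop control. A browser-side sketch of that pattern, with a hypothetical endpoint and element id:

```ts
// Sketch: wire a stop button to an AbortController that cancels the streamed
// chat request. The /api/workspace/stream-chat URL and #stop-generation id are hypothetical.
const controller = new AbortController();

document
  .querySelector<HTMLButtonElement>("#stop-generation")
  ?.addEventListener("click", () => controller.abort());

async function streamChat(prompt: string): Promise<void> {
  const res = await fetch("/api/workspace/stream-chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
    signal: controller.signal, // aborting here ends the stream client-side
  });

  const reader = res.body?.getReader();
  if (!reader) return;
  const decoder = new TextDecoder();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      render(decoder.decode(value, { stream: true }));
    }
  } catch (err) {
    if ((err as Error).name !== "AbortError") throw err; // stop button pressed
  }
}

function render(chunk: string): void {
  console.log(chunk);
}

void streamChat("Tell me about vector databases.");
```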
Timothy Carambat
c59ab9da0a
Refactor LLM chat backend (#717)
* refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon

* no thread in sync chat since only api uses it
adjust import locations
2024-02-14 12:32:07 -08:00
Timothy Carambat
aca5940650
Refactor handleStream to LLM Classes (#685) 2024-02-07 08:15:14 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai save properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
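
With per-workspace model selection, each provider class receives the workspace's preferred model at instantiation and only falls back to the environment default when none is set. A simplified sketch (the constructor shape and the OPEN_MODEL_PREF env name are assumptions):

```ts
// Sketch: a provider resolves its chat model from the workspace preference
// first, then an environment default. Requires Node types for process.env.
interface Workspace {
  chatModel?: string | null;
}

class OpenAiProvider {
  readonly model: string;

  constructor(workspace: Workspace = {}) {
    this.model =
      workspace.chatModel ?? process.env.OPEN_MODEL_PREF ?? "gpt-3.5-turbo";
  }
}

console.log(new OpenAiProvider({ chatModel: "gpt-4" }).model); // "gpt-4"
console.log(new OpenAiProvider({}).model); // env default, else "gpt-3.5-turbo"
```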
Timothy Carambat
75dd86967c
Implement AzureOpenAI model chat streaming (#518)
resolves #492
2024-01-03 16:25:39 -08:00
Timothy Carambat
6d5968bf7e
Llm chore cleanup (#501)
* move internal functions to private in class
simplify lc message convertor

* Fix hanging Context text when none is present
2023-12-28 14:42:34 -08:00
Sean Hatfield
5ad8a5f2d0
Allow use of any embedder for any llm/update data handling modal (#386)
* allow use of any embedder for any llm/update data handling modal

* Apply embedder override and fallback to OpenAI and Azure models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-16 15:19:49 -08:00
Timothy Carambat
c22c50cca8
Enable chat streaming for LLMs (#354)
* [Draft] Enable chat streaming for LLMs

* stream only, move sendChat to deprecated

* Update TODO deprecation comments
update console output color for streaming disabled
2023-11-13 15:07:30 -08:00
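
After this change streaming is the primary chat path and the old single-shot sendChat is kept only as a deprecated fallback. In outline, the streaming method yields tokens as they arrive instead of resolving once with the full reply; a toy sketch (method names mirror the commit wording, bodies are invented):

```ts
// Sketch of the streaming-vs-blocking split; the token source here is faked.
class ExampleLLMConnector {
  /** @deprecated kept for compatibility; prefer streamGetChatCompletion */
  async sendChat(prompt: string): Promise<string> {
    let full = "";
    for await (const token of this.streamGetChatCompletion(prompt)) full += token;
    return full;
  }

  async *streamGetChatCompletion(prompt: string): AsyncGenerator<string> {
    // A real provider would forward chunks from its SDK's streaming API.
    for (const token of ["echo:", " ", prompt]) yield token;
  }
}

(async () => {
  const llm = new ExampleLLMConnector();
  for await (const token of llm.streamGetChatCompletion("hello")) {
    process.stdout.write(token);
  }
})();
```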
Timothy Carambat
be9d8b0397
Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup
2023-11-06 13:13:53 -08:00
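
"Infinite" prompt input works by compressing history whenever the assembled messages would overflow the model's window: the oldest turns are dropped or summarized until the prompt fits. A schematic version (the token estimate and the drop-instead-of-summarize step are stand-ins for the real compression abstraction):

```ts
// Sketch: keep the system prompt and the newest turns, trimming older ones
// until the estimated token count fits the model's prompt window.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const estimateTokens = (text: string) => Math.ceil(text.length / 4); // heuristic

function compressMessages(
  messages: ChatMessage[],
  promptWindowLimit: number
): ChatMessage[] {
  // Assumes messages[0] is the system prompt, which is always preserved.
  const [system, ...history] = messages;
  const total = (msgs: ChatMessage[]) =>
    msgs.reduce((sum, m) => sum + estimateTokens(m.content), 0);

  const kept = [...history];
  while (kept.length > 1 && total([system, ...kept]) > promptWindowLimit) {
    kept.shift(); // a fuller implementation would summarize, not just drop
  }
  return [system, ...kept];
}
```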
Timothy Carambat
5d56ab623b
Anthropic claude 2 support (#305)
* WIP Anthropic support for chat, chat and query w/context

* Add onboarding support for Anthropic

* cleanup

* fix Anthropic answer parsing
move embedding selector to general util
2023-10-30 15:44:03 -07:00
Timothy Carambat
2a28415de4
Make openAI Azure embedding requests run concurrently to avoid input limits per call (#211)
resolves #184
2023-08-22 10:23:29 -07:00
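
Azure's embedding endpoint caps how many inputs one call may carry, so large documents are split into batches and the per-batch requests are issued concurrently, with the vectors reassembled in order. A rough sketch (the batch size and embedBatch stand-in are assumptions, not the provider's real API):

```ts
// Sketch: batch texts under a per-request input cap and embed the batches
// concurrently. embedBatch stands in for the provider's real embeddings call.
async function embedBatch(texts: string[]): Promise<number[][]> {
  return texts.map((t) => [t.length]); // fake vectors for illustration
}

async function embedChunksConcurrently(
  texts: string[],
  maxInputsPerCall = 16
): Promise<number[][]> {
  const batches: string[][] = [];
  for (let i = 0; i < texts.length; i += maxInputsPerCall) {
    batches.push(texts.slice(i, i + maxInputsPerCall));
  }
  // Each batch stays under the API's input limit; Promise.all runs them in parallel.
  const results = await Promise.all(batches.map((batch) => embedBatch(batch)));
  return results.flat(); // batch order is preserved, so vectors match input order
}
```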
Timothy Carambat
1f29cec918
Multiple LLM Support framework + AzureOpenAI Support (#180)
* Remove LangchainJS for chat support chaining
Implement runtime LLM selection
Implement AzureOpenAI Support for LLM + Embedding
WIP on frontend
Update env to reflect the new fields

* Replace keys with LLM Selection in settings modal
Enforce checks for new ENVs depending on LLM selection
2023-08-04 14:56:27 -07:00
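
Dropping the LangChain chat chain in favor of runtime selection means a single factory reads the configured provider from the environment and returns the matching connector, and adding a provider only requires a new case plus its ENV checks. A condensed sketch (provider names follow the commits above; class internals are omitted and invented):

```ts
// Sketch of runtime LLM selection from an env setting; connector classes are stubs.
interface LLMConnector {
  getChatCompletion(prompt: string): Promise<string>;
}

class OpenAiLLM implements LLMConnector {
  async getChatCompletion(p: string) { return `openai: ${p}`; }
}
class AzureOpenAiLLM implements LLMConnector {
  async getChatCompletion(p: string) { return `azure: ${p}`; }
}
class AnthropicLLM implements LLMConnector {
  async getChatCompletion(p: string) { return `anthropic: ${p}`; }
}

function getLLMProvider(selection = process.env.LLM_PROVIDER): LLMConnector {
  switch (selection) {
    case "azure":
      return new AzureOpenAiLLM();
    case "anthropic":
      return new AnthropicLLM();
    case "openai":
    default:
      return new OpenAiLLM();
  }
}
```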