Commit Graph

52 Commits

Author SHA1 Message Date
Sean Hatfield
7390bae6f6
Support DeepSeek (#2377)
* add deepseek support

* lint

* update deepseek context length

* add deepseek to onboarding

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-09-26 12:55:12 -07:00
Timothy Carambat
a30fa9b2ed
1943 add fireworksai support (#2300)
* Issue #1943: Add support for LLM provider - Fireworks AI

* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI

* class only return

---------

Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
2024-09-16 12:10:44 -07:00
Timothy Carambat
99f2c25b1c
Agent Context window + context window refactor. (#2126)
* Enable agent context windows to be accurate per provider:model

* Refactor model mapping to external file
Add token count to document length instead of char-count
reference promptWindowLimit from AIProvider in central location

* remove unused imports
2024-08-15 12:13:28 -07:00
Timothy Carambat
38fc181238
Add multimodality support (#2001)
* Add multimodality support

* Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal

* temp dev build

* patch bad import

* noscrolls for windows dnd

* update README

* add multimodal check
2024-07-31 10:47:49 -07:00
Timothy Carambat
9366e69d88
Add AWS bedrock support for LLM + agents (#1935)
add AWS bedrock support for LLM + agents
2024-07-23 16:35:37 -07:00
Timothy Carambat
0b845fbb1c
Deprecate .isSafe moderation (#1790)
Add type defs to helpers
2024-06-28 15:32:30 -07:00
Sean Hatfield
e72fa8b370
[FEAT] Generic OpenAI embedding provider (#1664)
* implement generic openai embedding provider

* linting

* comment & description update for generic openai embedding provider

* fix privacy for generic

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-06-21 16:27:02 -07:00
Sean Hatfield
d29292ebd2
[FEAT] Add LiteLLM embedding provider support (#1579)
* add liteLLM embedding provider support

* update tooltip id

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-06-06 12:43:34 -07:00
Sean Hatfield
5bf4b4db58
[FEAT] Add support for Voyage AI embedder (#1401)
* add support for voyageai embedder

* remove unneeded import

* linting

* Add ENV examples
Update how chunks are processed for Voyage
use correct langchain import
Add data handling

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-19 13:20:23 -05:00
Timothy Carambat
cae6cee1b5
Do not go through LLM to embed when embedding documents (#1428) 2024-05-16 17:51:04 -07:00
Sean Hatfield
826ef00da3
[FEAT] LiteLLM provider support (#1424)
* litellm LLM provider support

* fix lint error

* change import orders
fix issue with model retrieval

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-16 13:56:28 -07:00
timothycarambat
a87978d1d9 Make LanceDB the default vector database provider in the backend to prevent issues where this key is somehow never set by the user, resulting in a Pinecone error even though Pinecone was never chosen as the vector DB 2024-05-13 12:22:53 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support (#1279)
* add text gen web ui LLM provider support

* update README

* README typo

* update TextWebUI display name
patch workspace<>model support for provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support (#1268)
* koboldcpp LLM support

* update .env.examples for koboldcpp support

* update LLM preference order
update koboldcpp comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support (#1233)
* getChatCompletion working WIP streaming

* WIP

* working streaming WIP abort stream

* implement cohere embedder support

* remove inputType option from cohere embedder

* fix cohere LLM from not aborting stream when canceled by user

* Patch Cohere implementation

* add cohere to onboarding

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Timothy Carambat
df17fbda36
Add generic OpenAI endpoint support (#1178)
* Add generic OpenAI endpoint support

* allow any input for model in case provider does not support models endpoint
2024-04-23 13:06:07 -07:00
Timothy Carambat
c65f890afc
Add LMStudio embedding endpoint support (#1141)
* Add LMStudio embedding endpoint support

* update alive path check for HEAD
remove commented JSX

* update comment
2024-04-19 15:36:07 -07:00
Timothy Carambat
6f52a2b729
Embedder download - fallback URL (#1056)
* Embedder download - fallback URL

* improve logging for native embedder
2024-04-06 11:49:15 -07:00
Timothy Carambat
94b58249a3
Enable per-workspace provider/model combination (#1042)
* Enable per-workspace provider/model combination

* cleanup

* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model

* add space

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-04-05 10:58:36 -07:00
Sean Hatfield
0634013788
[FEAT] Groq LLM support (#865)
* Groq LLM support complete

* update useGetProvidersModels for groq models

* Add definitions
update comments and error log reports
add example envs

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Timothy Carambat
b64cb199f9
788 ollama embedder (#814)
* Add Ollama embedder model support calls

* update docs
2024-02-26 16:12:20 -08:00
Sean Hatfield
633f425206
[FEAT] OpenRouter integration (#784)
* WIP openrouter integration

* add OpenRouter options to onboarding flow and data handling

* add todo to fix headers for rankings

* OpenRouter LLM support complete

* Fix hanging response stream with OpenRouter
update tagline
update comment

* update timeout comment

* wait for first chunk to start timer

* sort OpenRouter models by organization

* uppercase first letter of organization

* sort grouped models by org

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support (#778)
* add LLM support for perplexity

* update README & example env

* fix ENV keys in example env files

* slight changes for QA of perplexity support

* Update Perplexity AI name

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints (#680) 2024-02-06 09:17:51 -08:00
Hakeem Abbas
5614e2ed30
feature: Integrate Astra as vectorDBProvider (#648)
* feature: Integrate Astra as vectorDBProvider

* Update .env.example

* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation

* reset workspaces

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-26 13:07:53 -08:00
Timothy Carambat
0df86699e7
feat: Add support for Zilliz Cloud by Milvus (#615)
* feat: Add support for Zilliz Cloud by Milvus

* update placeholder text
update data handling stmt

* update zilliz descriptor
2024-01-17 18:00:54 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
Timothy Carambat
8cc1455b72
feat: add support for variable chunk length (#415)
fix: cleanup code for embedding length clarity
resolves #388
2023-12-07 16:27:36 -08:00
Timothy Carambat
655ebd9479
[Feature] AnythingLLM can use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* add built-in LLM support (experimental)

* Update to progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00
Timothy Carambat
88cdd8c872
Add built-in embedding engine into AnythingLLM (#411)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* Update to progress output for embedder

* move embedder selection options to component

* forgot import

* add Data privacy alert updates for local embedder
2023-12-06 10:36:22 -08:00
Sean Hatfield
5ad8a5f2d0
Allow use of any embedder for any llm/update data handling modal (#386)
* allow use of any embedder for any llm/update data handling modal

* Apply embedder override and fallback to OpenAI and Azure models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-16 15:19:49 -08:00
Tobias Landenberger
a96a9d41a3
LocalAI for embeddings (#361)
* feature: add localAi as embedding provider

* chore: add LocalAI image

* chore: add localai embedding examples to docker .env.example

* update setting env
pull models from localai API

* update comments on embedder
Don't show cost estimation on UI

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-14 13:49:31 -08:00
Timothy Carambat
4bb99ab4bf
Support LocalAi as LLM provider by @tlandenberger (#373)
* feature: add LocalAI as llm provider

* update Onboarding/mgmt settings
Grab models from models endpoint for localai
merge with master

* update streaming for complete chunk streaming
update localAI LLM to be able to stream

* force schema on URL

---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
2023-11-14 12:31:44 -08:00
Francisco Bischoff
f499f1ba59
Using OpenAI API locally (#335)
* Using OpenAI API locally

* Infinite prompt input and compression implementation (#332)

* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup

* disable import on hosted instances (#339)

* disable import on hosted instances

* Update UI on disabled import/export

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>

* Add support for gpt-4-turbo 128K model (#340)

resolves #336
Add support for gpt-4-turbo 128K model

* 315 show citations based on relevancy score (#316)

* settings for similarity score threshold and prisma schema updated

* prisma schema migration for adding similarityScore setting

* WIP

* Min score default change

* added similarityThreshold checking for all vectordb providers

* linting

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>

* rename localai to lmstudio

* forgot files that were renamed

* normalize model interface

* add model and context window limits

* update LMStudio tagline

* Fully working LMStudio integration

---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
2023-11-09 12:33:21 -08:00
timothycarambat
745d2aeaff fix import path 2023-10-30 15:49:29 -07:00
Timothy Carambat
5d56ab623b
Anthropic Claude 2 support (#305)
* WIP Anthropic support for chat, chat and query w/context

* Add onboarding support for Anthropic

* cleanup

* fix Anthropic answer parsing
move embedding selector to general util
2023-10-30 15:44:03 -07:00
Timothy Carambat
cf0b24af02
Add Qdrant support for embedding, chat, and conversation (#192)
* Add Qdrant support for embedding, chat, and conversation

* Change comments
2023-08-15 15:26:44 -07:00
Timothy Carambat
f3a6147ffd
Add support for Weaviate VectorDB (#181) 2023-08-08 18:02:30 -07:00
Timothy Carambat
1f29cec918
Multiple LLM Support framework + AzureOpenAI Support (#180)
* Remove LangchainJS for chat support chaining
Implement runtime LLM selection
Implement AzureOpenAI Support for LLM + Embedding
WIP on frontend
Update env to reflect the new fields

* Replace keys with LLM Selection in settings modal
Enforce checks for new ENVs depending on LLM selection
2023-08-04 14:56:27 -07:00
Timothy Carambat
8929d96ed0
Move OpenAI api calls into its own interface/Class (#162)
* Move OpenAI api calls into its own interface/Class
move curate sources to be specific to each vectorDB's response for chat/query

* remove comment
2023-07-28 12:05:38 -07:00
Timothy Carambat
0a2f837fb2
improve citations to show all referenced text chunks and expand the citation to view the full referenced text (#161)
* improve citations to show all referenced text chunks and expand the citation to view the full referenced text
chunk text of the same document together

* remove debug
2023-07-27 22:33:27 -07:00
Timothy Carambat
c1deca4928
[Fork] Batch embed by jwaltz (#153)
* refactor: convert chunk embedding to one API call

* chore: lint

* fix chroma for batch and single vectorization of text

* Fix LanceDB multi and single vectorization

* Fix pinecone for single and multiple embeddings

---------

Co-authored-by: Jonathan Waltz <volcanicislander@gmail.com>
2023-07-20 12:05:23 -07:00
Timothy Carambat
9d0becb2ee
Add chat/conversation mode as the default chat mode for all Vector Databases (#112)
* Add chat/conversation mode as the default chat mode
Show menu for toggling options for chat/query/reset command
Show chat status below input
resolves #61

* remove console logs
2023-06-26 15:08:47 -07:00
timothycarambat
f0fd91db6f Reorg some files for clarity 2023-06-08 18:58:26 -07:00
Timothy Carambat
ad15e1f9b6
Lancedb support (#6)
* add start of lanceDB support

* lancedb initial support

* add null method for deletion of documents from namespace since LanceDB does not support it
show warning modal on frontend for this

* update .env.example and lancedb methods for sourcing

* change export method

* update readme
2023-06-08 18:40:29 -07:00