Commit Graph

63 Commits

Author SHA1 Message Date
Sean Hatfield
826ef00da3
[FEAT] LiteLLM provider support (#1424)
* litellm LLM provider support

* fix lint error

* change import orders
fix issue with model retrieval

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-16 13:56:28 -07:00
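For context on the entry above: LiteLLM exposes an OpenAI-compatible HTTP API, so a provider like this can generally be exercised by pointing a plain OpenAI-style request at the LiteLLM proxy. A minimal sketch follows; the base URL, env var names (LITE_LLM_BASE_PATH, LITE_LLM_API_KEY), and model alias are assumptions for illustration, not values taken from the commit.

```js
// Minimal sketch: chat completion against a LiteLLM proxy's OpenAI-compatible
// endpoint. Base URL, env var names, and model alias are illustrative only.
const BASE_URL = process.env.LITE_LLM_BASE_PATH ?? "http://localhost:4000";

async function chatViaLiteLLM(prompt) {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LITE_LLM_API_KEY ?? "not-needed"}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // whatever alias the proxy routes
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  if (!res.ok) throw new Error(`LiteLLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chatViaLiteLLM("Hello!").then(console.log).catch(console.error);
```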
Timothy Carambat
b6be43be95
Add Speech-to-text and Text-to-speech providers (#1394)
* Add Speech-to-text and Text-to-speech providers

* add files and update comment

* update comments

* patch: bad playerRef check
2024-05-14 11:57:21 -07:00
Sean Hatfield
9ed2309757
[FEAT] Add API key support for Oobabooga Web UI (#1354)
* add api key support for oobabooga web ui

* don't expose API Key for TextWebGenUi

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-13 12:58:16 -07:00
Timothy Carambat
7b18a36288
prevent accidental lockout from restricted chars in single pass mode (#1352)
* prevent accidental lockout from restricted chars in single pass mode

* update error message
2024-05-10 17:29:49 -07:00
Sean Hatfield
0a6a9e40c1
[FIX] Add max tokens field to generic OpenAI LLM connector (#1345)
* add max tokens field to generic openai llm connector

* add max_tokens property to generic openai agent provider
2024-05-10 14:49:02 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support (#1279)
* add text gen web ui LLM provider support

* update README

* README typo

* update TextWebUI display name
patch workspace<>model support for provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support (#1268)
* koboldcpp LLM support

* update .env.examples for koboldcpp support

* update LLM preference order
update koboldcpp comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support (#1233)
* getChatCompletion working WIP streaming

* WIP

* working streaming WIP abort stream

* implement cohere embedder support

* remove inputType option from cohere embedder

* fix cohere LLM from not aborting stream when canceled by user

* Patch Cohere implementation

* add cohere to onboarding

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Timothy Carambat
df17fbda36
Add generic OpenAI endpoint support (#1178)
* Add generic OpenAI endpoint support

* allow any input for model in case provider does not support models endpoint
2024-04-23 13:06:07 -07:00
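The second bullet of the commit above describes a fallback: accept any typed model name when the provider has no /models endpoint. A hedged sketch of that pattern is below; the function name and argument shapes are hypothetical, not the repository's actual code.

```js
// Sketch of the fallback described above: try the provider's OpenAI-compatible
// /models endpoint, and if it is missing or errors, accept whatever model name
// the user typed. Names here are illustrative only.
async function listModelsOrFallback(basePath, apiKey, userTypedModel) {
  try {
    const res = await fetch(`${basePath}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`No /models endpoint (${res.status})`);
    const { data } = await res.json();
    return data.map((m) => m.id); // provider supports model discovery
  } catch {
    // Provider has no models endpoint - trust the free-form user input.
    return userTypedModel ? [userTypedModel] : [];
  }
}
```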
Timothy Carambat
c65f890afc
Add LMStudio embedding endpoint support (#1141)
* Add LMStudio embedding endpoint support

* update alive path check for HEAD
remove commented JSX

* update comment
2024-04-19 15:36:07 -07:00
Timothy Carambat
58b744771f
Add support for Gemini-1.5 Pro (#1134)
* Add support for Gemini-1.5 Pro
bump @google/generative-ai pkg
Toggle apiVersion if beta model selected
resolves #1109

* update API messages due to package change
2024-04-19 08:59:46 -07:00
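The commit above bumps @google/generative-ai and toggles apiVersion when a beta model is selected. A sketch of what such a toggle can look like with that package is below; the exact model-name check is an assumption for illustration, since 1.5-class models were beta-only at the time.

```js
const { GoogleGenerativeAI } = require("@google/generative-ai");

// Sketch of the apiVersion toggle described above: requests for beta-only
// models are sent to the v1beta API surface. The model-name check is
// illustrative, not the commit's exact logic.
function getGeminiModel(apiKey, modelName) {
  const genAI = new GoogleGenerativeAI(apiKey);
  const isBetaModel = modelName.includes("1.5");
  return genAI.getGenerativeModel(
    { model: modelName },
    { apiVersion: isBetaModel ? "v1beta" : "v1" }
  );
}
```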
Timothy Carambat
a5bb77f97a
Agent support for @agent default agent inside workspace chat (#1093)
V1 of agent support via built-in `@agent` that can be invoked alongside normal workspace RAG chat.
2024-04-16 10:50:10 -07:00
Timothy Carambat
d54e1c1f2d
expand support for non-US azure deployments (#1080)
* expand support for non-US azure deployments

* update conditional
2024-04-10 09:34:14 -07:00
Timothy Carambat
94b58249a3
Enable per-workspace provider/model combination (#1042)
* Enable per-workspace provider/model combination

* cleanup

* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model

* add space

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-04-05 10:58:36 -07:00
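A per-workspace provider/model combination like the one added above implies a simple resolution order: use the workspace's own setting when present, otherwise fall back to the system default. A hedged sketch, with hypothetical field names not taken from the codebase:

```js
// Illustrative resolution order for a per-workspace provider/model override.
// Field and settings names are hypothetical.
function resolveLLMPreference(workspace, systemSettings) {
  return {
    provider: workspace?.chatProvider ?? systemSettings.llmProvider,
    model: workspace?.chatModel ?? systemSettings.llmModelPref,
  };
}

// Example: a workspace pinned to Anthropic keeps its model even if the
// system default later changes to another provider.
const pref = resolveLLMPreference(
  { chatProvider: "anthropic", chatModel: "claude-3-haiku-20240307" },
  { llmProvider: "openai", llmModelPref: "gpt-4o" }
);
console.log(pref);
```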
Ivan Skodje
e9199bac12
[FEAT] Check port access in docker before showing a default error (#961)
* [FEAT] Added port checks in updateENV.validDockerizedUrl to prevent docker from assuming it cannot access localhost URLs

* [CHORE] Updated error message to include Linux URL

* Patch port checking for general loopbacks

* typo

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-04-02 10:34:50 -07:00
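The commit above adds a reachability check before declaring a localhost URL unusable from inside Docker. A minimal sketch of such a probe is below; the helper name, timeout, and the host used in the usage line are assumptions for illustration.

```js
const net = require("net");

// Sketch of a loopback reachability probe: before rejecting a localhost-style
// URL inside Docker, try to open a TCP connection to the host/port and only
// warn if nothing answers. Helper name and timeout are illustrative.
function isPortReachable(host, port, timeoutMs = 1500) {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    const done = (result) => {
      socket.destroy();
      resolve(result);
    };
    socket.setTimeout(timeoutMs);
    socket.on("connect", () => done(true));
    socket.on("timeout", () => done(false));
    socket.on("error", () => done(false));
  });
}

// Usage: isPortReachable("host.docker.internal", 11434).then(console.log);
```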
timothycarambat
bfedfebfab
security: force sanitize env string set by user
2024-03-29 13:03:05 -07:00
Timothy Carambat
1135853740
Patch LMStudio Inference server bug integration (#957)
2024-03-22 14:39:30 -07:00
Timothy Carambat
7e7e957e32
Enable privacy and handling to be reviewed and modified (#910)
2024-03-14 16:56:15 -07:00
Timothy Carambat
0ada882991
Support external transcription providers (#909)
* Support External Transcription providers

* patch files

* update docs

* fix return data
2024-03-14 15:43:26 -07:00
Sean Hatfield
ac0e62d490
[FEAT] Anthropic Haiku model support (#901)
add Haiku model support
2024-03-13 17:32:02 -07:00
Sean Hatfield
e0d5d8039a
[FEAT] Claude 3 support and implement new version of Anthropic SDK (#863)
* implement new version of anthropic sdk and support new models

* remove handleAnthropicStream and move to handleStream inside anthropic provider

* update useGetProvidersModels for new anthropic models
2024-03-06 14:57:47 -08:00
Sean Hatfield
0634013788
[FEAT] Groq LLM support (#865)
* Groq LLM support complete

* update useGetProvidersModels for groq models

* Add definitions
update comments and error log reports
add example envs

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Timothy Carambat
b64cb199f9
788 ollama embedder (#814)
* Add Ollama embedder model support calls

* update docs
2024-02-26 16:12:20 -08:00
Sean Hatfield
633f425206
[FEAT] OpenRouter integration (#784)
* WIP openrouter integration

* add OpenRouter options to onboarding flow and data handling

* add todo to fix headers for rankings

* OpenRouter LLM support complete

* Fix hanging response stream with OpenRouter
update tagline
update comment

* update timeout comment

* wait for first chunk to start timer

* sort OpenRouter models by organization

* uppercase first letter of organization

* sort grouped models by org

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support (#778)
* add LLM support for perplexity

* update README & example env

* fix ENV keys in example env files

* slight changes for QA of perplexity support

* Update Perplexity AI name

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
Sean Hatfield
d789920a19
[FEAT] Automated audit logging (#667)
* WIP event logging - new table for events and new settings view for viewing

* WIP add logging

* UI for log rows

* rename files to Logging to prevent getting gitignored

* add metadata for all logging events and colored badges in logs page

* remove unneeded comment

* cleanup namespace for logging

* clean up backend calls

* update logging to show to => from settings changes

* add logging for invitations, created, deleted, and accepted

* add logging for user created, updated, suspended, or removed

* add logging for workspace deleted

* add logging for chat logs exported

* add logging for API keys, LLM, embedder, vector db, embed chat, and reset button

* modify event logs

* update to event log types

* simplify rendering of event badges

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-06 15:21:40 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints (#680)
2024-02-06 09:17:51 -08:00
Hakeem Abbas
5614e2ed30
feature: Integrate Astra as vectorDBProvider (#648)
* feature: Integrate Astra as vectorDBProvider

* Update .env.example

* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation

* reset workspaces

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-26 13:07:53 -08:00
Sean Hatfield
2f3db0e63a
[FEAT] support pinecone serverless (#639)
* migrate pinecone package to latest version and migrate pinecone vectordb provider class

* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete

* migrate query for pinecone db

* typo in log

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-22 16:41:20 -08:00
Timothy Carambat
44eb1e9ab0
617 persist special env keys (#624)
* add support for exporting to json and csv in workspace chats

* safety encode URL options

* remove message about openai fine tuning on export success

* all defaults to jsonl

* Persist special env keys on updates

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-01-18 18:13:24 -08:00
Timothy Carambat
0df86699e7
feat: Add support for Zilliz Cloud by Milvus (#615)
* feat: Add support for Zilliz Cloud by Milvus

* update placeholder text
update data handling stmt

* update zilliz descriptor
2024-01-17 18:00:54 -08:00
Sean Hatfield
3fe7a25759
add token context limit for native llm settings (#614)
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 16:25:30 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
Timothy Carambat
ceadc8d467
patch gpt-4-turbo token allowance for Azure model (#514)
2024-01-02 12:49:48 -08:00
Timothy Carambat
d7481671ba
Prevent external service localhost question (#497)
* Prevent external service localhost question

* add 0.0.0.0 to docker-invalid URL

* clarify hint
2023-12-28 10:47:02 -08:00
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
timothycarambat
7bee849c65
chore: Force VectorCache to always be on;
update file picker spacing for attributes
2023-12-20 10:45:03 -08:00
Timothy Carambat
8cc1455b72
feat: add support for variable chunk length (#415)
fix: cleanup code for embedding length clarity
resolves #388
2023-12-07 16:27:36 -08:00
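A variable chunk length, as added above, boils down to reading the maximum characters per chunk from a user setting instead of a hard-coded constant. A minimal sketch; the setting name, default, and overlap value are assumptions for illustration.

```js
// Minimal illustration of a variable chunk length: the max characters per
// chunk comes from a user-configurable setting. Names and defaults are
// assumptions, not the repository's actual implementation.
function chunkText(text, chunkLength = 1000, overlap = 20) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkLength - overlap) {
    chunks.push(text.slice(start, start + chunkLength));
  }
  return chunks;
}

const maxChunkLength = Number(process.env.TEXT_SPLITTER_CHUNK_SIZE ?? 1000);
console.log(chunkText("some long document text...", maxChunkLength).length);
```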
Timothy Carambat
655ebd9479
[Feature] AnythingLLM can use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* add built-in LLM support (experimental)

* Update to progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00
Timothy Carambat
88cdd8c872
Add built-in embedding engine into AnythingLLM (#411)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* Update to progress output for embedder

* move embedder selection options to component

* forgot import

* add Data privacy alert updates for local embedder
2023-12-06 10:36:22 -08:00
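The built-in embedder added above is based on the all-MiniLM-L6-v2 sentence transformer run locally. One common way to produce such embeddings in Node is the @xenova/transformers pipeline API; the sketch below illustrates the idea and is not asserted to be the commit's exact implementation.

```js
// Sketch only: generate all-MiniLM-L6-v2 embeddings locally in Node using
// @xenova/transformers. Illustrates the built-in embedder idea; the package
// choice is an assumption for this example.
const { pipeline } = require("@xenova/transformers");

async function embedTexts(texts) {
  const extractor = await pipeline(
    "feature-extraction",
    "Xenova/all-MiniLM-L6-v2" // downloads the model on first use
  );
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  // output is a tensor of shape [texts.length, 384]
  return output.tolist();
}

embedTexts(["hello world"]).then((vecs) => console.log(vecs[0].length)); // 384
```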
pritchey
732d07829f
401-Password Complexity Check Capability (#402)
* Added improved password complexity checking capability.

* Move password complexity checker as User.util
dynamically import required libraries depending on code execution flow
lint

* Ensure persistence of password requirements on restarts via env-dump
Copy example schema to docker env as well

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-12-05 09:13:06 -08:00
Timothy Carambat
6fa8b0ce93
Add API key option to LocalAI (#407)
* Add API key option to LocalAI

* add api key for model dropdown selector
2023-12-04 08:38:15 -08:00
Tobias Landenberger
a96a9d41a3
LocalAI for embeddings (#361)
* feature: add localAi as embedding provider

* chore: add LocalAI image

* chore: add localai embedding examples to docker .env.example

* update setting env
pull models from localai API

* update comments on embedder
Don't show cost estimation on UI

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-14 13:49:31 -08:00
Timothy Carambat
4bb99ab4bf
Support LocalAi as LLM provider by @tlandenberger (#373)
* feature: add LocalAI as llm provider

* update Onboarding/mgmt settings
Grab models from models endpoint for localai
merge with master

* update streaming for complete chunk streaming
update localAI LLM to be able to stream

* force schema on URL

---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
2023-11-14 12:31:44 -08:00
Francisco Bischoff
f499f1ba59
Using OpenAI API locally (#335)
* Using OpenAI API locally

* Infinite prompt input and compression implementation (#332)

* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup

* disable import on hosted instances (#339)

* disable import on hosted instances

* Update UI on disabled import/export

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>

* Add support for gpt-4-turbo 128K model (#340)

resolves #336
Add support for gpt-4-turbo 128K model

* 315 show citations based on relevancy score (#316)

* settings for similarity score threshold and prisma schema updated

* prisma schema migration for adding similarityScore setting

* WIP

* Min score default change

* added similarityThreshold checking for all vectordb providers

* linting

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>

* rename localai to lmstudio

* forgot files that were renamed

* normalize model interface

* add model and context window limits

* update LMStudio tagline

* Fully working LMStudio integration

---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
2023-11-09 12:33:21 -08:00
Timothy Carambat
be9d8b0397
Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup
2023-11-06 13:13:53 -08:00