Commit Graph

59 Commits

Author SHA1 Message Date
Sean Hatfield
c24b79c9d1
[FEAT] Bing Search API web search provider (#1519)
implement bing search engine for agents
2024-05-23 16:49:30 -07:00
Sean Hatfield
6a2d7aca28
[FEAT] Custom login screen icon + custom app name (#1500)
* implement custom icon on login screen for single & multi user + custom app name feature

* hide field when not relevant

* set customApp name

* show original anythingllm login logo unless custom logo is set

* nit-picks

* remove console log

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-23 14:14:53 -07:00
Timothy Carambat
28eba636e9
Allow setting of safety thresholds for Gemini (#1466)
* Allow setting of safety thresholds for Gemini

* linting
2024-05-20 13:17:00 -05:00
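
Note: the safety-threshold change above corresponds to the `safetySettings` option in Google's `@google/generative-ai` SDK. A minimal sketch of how a user-selected threshold could be applied, assuming a hypothetical `GEMINI_SAFETY_SETTING` env variable (env names here are illustrative, not necessarily AnythingLLM's actual wiring):

```js
// Minimal sketch, not AnythingLLM's actual implementation.
// GEMINI_SAFETY_SETTING and GOOGLE_API_KEY are hypothetical env names.
const {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const threshold =
  HarmBlockThreshold[process.env.GEMINI_SAFETY_SETTING] ??
  HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE;

const model = genAI.getGenerativeModel({
  model: "gemini-pro",
  // Apply the same user-selected threshold to each harm category.
  safetySettings: [
    HarmCategory.HARM_CATEGORY_HARASSMENT,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
  ].map((category) => ({ category, threshold })),
});

// Usage: model.generateContent("...") now enforces the chosen thresholds.
```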
Sean Hatfield
5bf4b4db58
[FEAT] Add support for Voyage AI embedder (#1401)
* add support for voyageai embedder

* remove unneeded import

* linting

* Add ENV examples
Update how chunks are processed for Voyage
use correct langchain import
Add data handling

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-19 13:20:23 -05:00
Timothy Carambat
1a5aacb001
Support multi-model whispers (#1444) 2024-05-17 21:31:29 -07:00
Sean Hatfield
826ef00da3
[FEAT] LiteLLM provider support (#1424)
* litellm LLM provider support

* fix lint error

* change import orders
fix issue with model retrieval

---------

Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
2024-05-16 13:56:28 -07:00
Timothy Carambat
15cf921616
Support SQL Agent skill (#1411)
* Support SQL Agent skill

* add MSSQL agent connector

* Add frontend to agent skills
remove FAKE_DB mock
reset skills to pickup child-skill dynamically

* add prompt examples for tools on untooled

* add better logging on SQL agents

* Wipe toolruns on each chat relay so tools can be used within the same session

* update comments
2024-05-16 10:38:21 -07:00
Timothy Carambat
b6be43be95
Add Speech-to-text and Text-to-speech providers (#1394)
* Add Speech-to-text and Text-to-speech providers

* add files and update comment

* update comments

* patch: bad playerRef check
2024-05-14 11:57:21 -07:00
Timothy Carambat
64b62290d7
Set gpt-4o as default for OpenAI (#1391) 2024-05-13 14:31:49 -07:00
Sean Hatfield
9ed2309757
[FEAT] Add API key support for Oobabooga Web UI (#1354)
* add api key support for oobabooga web ui

* don't expose API Key for TextWebGenUi

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-13 12:58:16 -07:00
Sean Hatfield
0a6a9e40c1
[FIX] Add max tokens field to generic OpenAI LLM connector (#1345)
* add max tokens field to generic openai llm connector

* add max_tokens property to generic openai agent provider
2024-05-10 14:49:02 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support (#1279)
* add text gen web ui LLM provider support

* update README

* README typo

* update TextWebUI display name
patch workspace<>model support for provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support (#1268)
* koboldcpp LLM support

* update .env.examples for koboldcpp support

* update LLM preference order
update koboldcpp comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support (#1233)
* getChatCompletion working WIP streaming

* WIP

* working streaming WIP abort stream

* implement cohere embedder support

* remove inputType option from cohere embedder

* fix cohere LLM from not aborting stream when canceled by user

* Patch Cohere implementation

* add cohere to onboarding

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Timothy Carambat
df17fbda36
Add generic OpenAI endpoint support (#1178)
* Add generic OpenAI endpoint support

* allow any input for model in case provider does not support models endpoint
2024-04-23 13:06:07 -07:00
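
Note: because the generic endpoint only assumes OpenAI API compatibility, and the backend may not expose a `/models` endpoint (hence the free-form model input above), the connector reduces to a base URL plus whatever model string the server accepts. A rough sketch using the `openai` npm client; the env variable names are hypothetical:

```js
// Rough sketch, assuming an OpenAI-compatible backend; env names are hypothetical.
const OpenAI = require("openai");

const client = new OpenAI({
  baseURL: process.env.GENERIC_OPENAI_BASE_PATH, // e.g. http://localhost:8000/v1
  apiKey: process.env.GENERIC_OPENAI_API_KEY ?? "not-needed",
});

async function chat(prompt) {
  const response = await client.chat.completions.create({
    // The backend may not expose /v1/models, so any model string the
    // server accepts can be supplied here.
    model: process.env.GENERIC_OPENAI_MODEL_PREF,
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content;
}
```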
Timothy Carambat
b3c7c002db
Warn user when changing embedder with current progress already (#1135)
* Warn user when changing embedder with current progress already

* update comment
2024-04-19 09:51:58 -07:00
Timothy Carambat
a5bb77f97a
Agent support for @agent default agent inside workspace chat (#1093)
V1 of agent support via built-in `@agent` that can be invoked alongside normal workspace RAG chat.
2024-04-16 10:50:10 -07:00
Timothy Carambat
ce98ff4653
Enable customization of chunk length and overlap (#1059)
* Enable customization of chunk length and overlap

* fix onboarding link
show max limit in UI and prevent overlap >= chunk size
2024-04-06 16:38:07 -07:00
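
Note: the constraint mentioned in that commit (overlap must stay below chunk size, with a maximum surfaced in the UI) can be illustrated with a small standalone splitter. This is a sketch of the idea only, not the project's actual text-splitting code:

```js
// Illustrative sketch only; the max limit of 8192 is an assumed example value.
function splitIntoChunks(text, chunkSize = 1000, chunkOverlap = 20, maxChunkSize = 8192) {
  if (chunkSize > maxChunkSize)
    throw new Error(`chunkSize cannot exceed ${maxChunkSize}`);
  if (chunkOverlap >= chunkSize)
    throw new Error("chunkOverlap must be less than chunkSize");

  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - chunkOverlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Example: chunks of 5 characters with an overlap of 2.
console.log(splitIntoChunks("abcdefghijkl", 5, 2)); // ["abcde", "defgh", "ghijk", "jkl"]
```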
Timothy Carambat
94b58249a3
Enable per-workspace provider/model combination (#1042)
* Enable per-workspace provider/model combination

* cleanup

* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model

* add space

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-04-05 10:58:36 -07:00
Timothy Carambat
8c7379cda1
persist provider keys between toggle providers (#1041) 2024-04-04 13:47:33 -07:00
timothycarambat
49f30e051c
security: patch footer icon self-xss from privileged user 2024-03-29 13:39:11 -07:00
Timothy Carambat
52fac84422
Patch ability to update multi-user-flag once set (#993)
* Patch ability to update multi-user-flag once set

* update logo function to safe update key values
2024-03-29 10:56:32 -07:00
Timothy Carambat
1135853740
Patch LMStudio Inference server bug integration (#957) 2024-03-22 14:39:30 -07:00
Timothy Carambat
7e7e957e32
Enable privacy and handling to be reviewed and modified (#910) 2024-03-14 16:56:15 -07:00
Timothy Carambat
0ada882991
Support external transcription providers (#909)
* Support External Transcription providers

* patch files

* update docs

* fix return data
2024-03-14 15:43:26 -07:00
Sean Hatfield
0634013788
[FEAT] Groq LLM support (#865)
* Groq LLM support complete

* update useGetProvidersModels for groq models

* Add definitions
update comments and error log reports
add example envs

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Sean Hatfield
633f425206
[FEAT] OpenRouter integration (#784)
* WIP openrouter integration

* add OpenRouter options to onboarding flow and data handling

* add todo to fix headers for rankings

* OpenRouter LLM support complete

* Fix hanging response stream with OpenRouter
update tagline
update comment

* update timeout comment

* wait for first chunk to start timer

* sort OpenRouter models by organization

* uppercase first letter of organization

* sort grouped models by org

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support (#778)
* add LLM support for perplexity

* update README & example env

* fix ENV keys in example env files

* slight changes for QA of perplexity support

* Update Perplexity AI name

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
Sean Hatfield
17c1913ccc
[FEAT]: Allow user to set support email (#726)
* implement custom support email for usermenu support button

* small refactor

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-19 10:30:41 -08:00
Sean Hatfield
b985524901
[FEAT] Customizable footer icon links in Appearance Settings (#694)
* WIP custom footer icons

* UI for updating footer icons complete and backend to save/modify

* add backend for unprotected footer fetch

* break out footer into separate component and render footer items using a cache for 1 hour

* wip review

* refactor & cleanup

* Optimize footer form component
Optimize caching for footer icons
Add validation on SystemSetting upserts
Normalize fallback items for footer_data

* Adjust max icons to 3

* fix success message on remove

* fix success message on remove

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-08 12:17:01 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints (#680) 2024-02-06 09:17:51 -08:00
Hakeem Abbas
5614e2ed30
feature: Integrate Astra as vectorDBProvider (#648)
* feature: Integrate Astra as vectorDBProvider

* Update .env.example

* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation

* reset workspaces

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-26 13:07:53 -08:00
Sean Hatfield
2f3db0e63a
[FEAT] support pinecone serverless (#639)
* migrate pinecone package to latest version and migrate pinecone vectordb provider class

* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete

* migrate query for pinecone db

* typo in log

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-22 16:41:20 -08:00
Timothy Carambat
0df86699e7
feat: Add support for Zilliz Cloud by Milvus (#615)
* feat: Add support for Zilliz Cloud by Milvus

* update placeholder text
update data handling stmt

* update zilliz descriptor
2024-01-17 18:00:54 -08:00
Sean Hatfield
3fe7a25759
add token context limit for native llm settings (#614)
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 16:25:30 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
Timothy Carambat
cba66150d7
patch: API key to localai service calls (#421)
connect #417
2023-12-11 14:18:28 -08:00
Timothy Carambat
8cc1455b72
feat: add support for variable chunk length (#415)
fix: cleanup code for embedding length clarity
resolves #388
2023-12-07 16:27:36 -08:00
Timothy Carambat
655ebd9479
[Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* add built-in LLM support (experimental)

* Update to progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00
timothycarambat
fecfb0fafc
chore: remove unused NO_DEBUG env 2023-12-07 14:14:30 -08:00
Timothy Carambat
88cdd8c872
Add built-in embedding engine into AnythingLLM (#411)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* Update to progress output for embedder

* move embedder selection options to component

* forgot import

* add Data privacy alert updates for local embedder
2023-12-06 10:36:22 -08:00
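
Note: one way a local all-MiniLM-L6-v2 embedder like the one above can be run in Node is via the `@xenova/transformers` feature-extraction pipeline; the snippet below is an illustrative sketch, not the project's actual embedder class:

```js
// Illustrative sketch of a local all-MiniLM-L6-v2 embedder; assumes the
// @xenova/transformers package, which downloads the model on first use.
const { pipeline } = require("@xenova/transformers");

let extractor = null;
async function embedText(text) {
  // Lazily load the model once, then reuse it for subsequent calls.
  if (!extractor) {
    extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  }
  const output = await extractor(text, { pooling: "mean", normalize: true });
  return Array.from(output.data); // 384-dimensional embedding vector
}

embedText("Hello, world!").then((vector) => console.log(vector.length)); // 384
```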
Timothy Carambat
6fa8b0ce93
Add API key option to LocalAI (#407)
* Add API key option to LocalAI

* add api key for model dropdown selector
2023-12-04 08:38:15 -08:00
Sean Hatfield
73f342eb19
Warning about switching embedder or vectordb (#385)
* added warning modal to LLM preference

* added warning modal for changing embedder

* remove warning from LLM preference & add warning to vector database selection

* linting

* remove comments and move warning modal to component

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-16 14:35:14 -08:00
Tobias Landenberger
a96a9d41a3
LocalAI for embeddings (#361)
* feature: add localAi as embedding provider

* chore: add LocalAI image

* chore: add localai embedding examples to docker .env.example

* update setting env
pull models from localai API

* update comments on embedder
Dont show cost estimation on UI

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2023-11-14 13:49:31 -08:00
Timothy Carambat
4bb99ab4bf
Support LocalAi as LLM provider by @tlandenberger (#373)
* feature: add LocalAI as llm provider

* update Onboarding/mgmt settings
Grab models from models endpoint for localai
merge with master

* update streaming for complete chunk streaming
update localAI LLM to be able to stream

* force scheme on URL

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
2023-11-14 12:31:44 -08:00
Francisco Bischoff
f499f1ba59
Using OpenAI API locally (#335)
* Using OpenAI API locally

* Infinite prompt input and compression implementation (#332)

* WIP on continuous prompt window summary

* wip

* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff

* Implement compression for Anthropic
Fix lancedb sources

* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources

* Resolve Weaviate citation sources not working with schema

* comment cleanup

* disable import on hosted instances (#339)

* disable import on hosted instances

* Update UI on disabled import/export

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>

* Add support for gpt-4-turbo 128K model (#340)

resolves #336
Add support for gpt-4-turbo 128K model

* 315 show citations based on relevancy score (#316)

* settings for similarity score threshold and prisma schema updated

* prisma schema migration for adding similarityScore setting

* WIP

* Min score default change

* added similarityThreshold checking for all vectordb providers

* linting

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>

* rename localai to lmstudio

* forgot files that were renamed

* normalize model interface

* add model and context window limits

* update LMStudio tagline

* Fully working LMStudio integration

---------

Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
2023-11-09 12:33:21 -08:00