Commit Graph

79 Commits

Author SHA1 Message Date
Timothy Carambat
c65f890afc
Add LMStudio embedding endpoint support (#1141)
* Add LMStudio embedding endpoint support

* update alive path check for HEAD
remove commented JSX

* update comment
2024-04-19 15:36:07 -07:00
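The commit above mentions a HEAD-based liveness check plus an embedding call against LM Studio's OpenAI-compatible local server. A minimal sketch of that pattern follows; it is not the project's implementation, and the base URL, probe path, and model name are placeholder assumptions.

```ts
// Hedged sketch: probe an OpenAI-compatible embedding server with HEAD, then embed.
const LMSTUDIO_BASE = "http://localhost:1234/v1"; // placeholder base URL

async function isAlive(baseUrl: string): Promise<boolean> {
  try {
    // A HEAD request is enough to confirm the server answers at all.
    const res = await fetch(`${baseUrl}/models`, { method: "HEAD" });
    return res.ok;
  } catch {
    return false;
  }
}

async function embed(texts: string[]): Promise<number[][]> {
  if (!(await isAlive(LMSTUDIO_BASE))) throw new Error("Embedding server unreachable");
  const res = await fetch(`${LMSTUDIO_BASE}/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "my-embedding-model", input: texts }), // model name is hypothetical
  });
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible servers return { data: [{ embedding: number[] }, ...] }
  return data.data.map((d: { embedding: number[] }) => d.embedding);
}
```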
Timothy Carambat
58b744771f
Add support for Gemini-1.5 Pro (#1134)
* Add support for Gemini-1.5 Pro
bump @google/generative-ai pkg
Toggle apiVersion if beta model selected
resolves #1109

* update API messages due to package change
2024-04-19 08:59:46 -07:00
Timothy Carambat
661563408a
Enable dynamic GPT model dropdown (#1111)
* Enable dynamic GPT model dropdown
2024-04-16 14:54:39 -07:00
Timothy Carambat
a5bb77f97a
Agent support for @agent default agent inside workspace chat (#1093)
V1 of agent support via built-in `@agent` that can be invoked alongside normal workspace RAG chat.
2024-04-16 10:50:10 -07:00
Timothy Carambat
d54e1c1f2d
expand support for non-US azure deployments (#1080)
* expand support for non-US azure deployments

* update conditional
2024-04-10 09:34:14 -07:00
Timothy Carambat
6f52a2b729
Embedder download - fallback URL (#1056)
* Embedder download - fallback URL

* improve logging for native embedder
2024-04-06 11:49:15 -07:00
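A fallback download URL like the one described above usually amounts to trying sources in order and logging each failure. Below is a hedged sketch of that idea only; the URLs, file path, and log prefix are hypothetical, not the project's actual values.

```ts
// Try each download source in order; save the first one that succeeds.
import { writeFile } from "node:fs/promises";

const MODEL_URLS = [
  "https://cdn.example.com/models/embedder.onnx", // primary (placeholder)
  "https://mirror.example.com/models/embedder.onnx", // fallback (placeholder)
];

async function downloadEmbedder(destination = "./storage/embedder.onnx"): Promise<void> {
  let lastError: unknown;
  for (const url of MODEL_URLS) {
    try {
      console.log(`[NativeEmbedder] downloading from ${url}`);
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      await writeFile(destination, Buffer.from(await res.arrayBuffer()));
      console.log(`[NativeEmbedder] saved model to ${destination}`);
      return;
    } catch (err) {
      lastError = err;
      console.warn(`[NativeEmbedder] download failed from ${url}, trying next source`);
    }
  }
  throw new Error(`All embedder download sources failed: ${lastError}`);
}
```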
Timothy Carambat
94b58249a3
Enable per-workspace provider/model combination (#1042)
* Enable per-workspace provider/model combination

* cleanup

* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model

* add space

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-04-05 10:58:36 -07:00
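Per-workspace provider/model selection, as in the commit above, boils down to preferring the workspace's own settings and falling back to the system-wide configuration. The sketch below is illustrative only; the field and environment-variable names are assumptions rather than the project's actual schema.

```ts
// Resolve the provider/model pair for a chat: workspace override first, env defaults second.
interface Workspace {
  chatProvider?: string | null;
  chatModel?: string | null;
}

function resolveChatSettings(workspace: Workspace): { provider: string; model: string } {
  return {
    provider: workspace.chatProvider ?? process.env.LLM_PROVIDER ?? "openai",
    model: workspace.chatModel ?? process.env.LLM_MODEL_PREF ?? "gpt-3.5-turbo",
  };
}
```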
Ivan Skodje
e9199bac12
[FEAT] Check port access in docker before showing a default error (#961)
* [FEAT] Added port checks in updateENV.validDockerizedUrl to prevent docker from assuming it cannot access localhost URLs

* [CHORE] Updated error message to include Linux URL

* Patch port checking for general loopbacks

* typo

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-04-02 10:34:50 -07:00
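The port check described in this commit can be sketched as an actual TCP connection attempt before the app decides a loopback URL is unreachable from inside Docker. The helper name, timeout, and error copy below are assumptions; only the general approach is taken from the commit description.

```ts
// Try to open a TCP connection to the host/port before warning about loopback URLs.
import net from "node:net";

function portIsOpen(host: string, port: number, timeoutMs = 1000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    const finish = (result: boolean) => {
      socket.destroy();
      resolve(result);
    };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => finish(true));
    socket.once("timeout", () => finish(false));
    socket.once("error", () => finish(false));
  });
}

// Usage: only show the "cannot reach localhost from Docker" error when the port truly is closed.
async function validateServiceUrl(raw: string): Promise<string | null> {
  const url = new URL(raw);
  const loopbacks = ["localhost", "127.0.0.1", "0.0.0.0"];
  if (!loopbacks.includes(url.hostname)) return null;
  const reachable = await portIsOpen(url.hostname, Number(url.port || 80));
  return reachable
    ? null
    : "This loopback URL is not reachable from inside Docker. On Linux, try the Docker bridge address (e.g. http://172.17.0.1) instead.";
}
```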
timothycarambat
bfedfebfab security: force sanitize env string set by user 2024-03-29 13:03:05 -07:00
Timothy Carambat
1135853740
Patch LMStudio Inference server bug integration (#957) 2024-03-22 14:39:30 -07:00
Timothy Carambat
7e7e957e32
Enable privacy and handling to be reviewed and modified (#910) 2024-03-14 16:56:15 -07:00
Timothy Carambat
0ada882991
Support external transcription providers (#909)
* Support External Transcription providers

* patch files

* update docs

* fix return data
2024-03-14 15:43:26 -07:00
Sean Hatfield
ac0e62d490
[FEAT] Anthropic Haiku model support (#901)
add Haiku model support
2024-03-13 17:32:02 -07:00
Timothy Carambat
0e46a11cb6
Stop generation button during stream-response (#892)
* Stop generation button during stream-response

* add custom stop icon

* add stop to thread chats
2024-03-12 15:21:27 -07:00
Sean Hatfield
e0d5d8039a
[FEAT] Claude 3 support and implement new version of Anthropic SDK (#863)
* implement new version of anthropic sdk and support new models

* remove handleAnthropicStream and move to handleStream inside anthropic provider

* update useGetProvidersModels for new anthropic models
2024-03-06 14:57:47 -08:00
Sean Hatfield
0634013788
[FEAT] Groq LLM support (#865)
* Groq LLM support complete

* update useGetProvidersModels for groq models

* Add definitions
update comments and error log reports
add example envs

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Sean Hatfield
fde905aac1
[FEAT] JSON export append all metadata fields to workspace chats (#845)
have JSON export append all metadata fields
2024-02-29 17:04:59 -08:00
Timothy Carambat
b64cb199f9
788 ollama embedder (#814)
* Add Ollama embedder model support calls

* update docs
2024-02-26 16:12:20 -08:00
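For context on the Ollama embedder call mentioned above, a hedged sketch follows. The base URL, model name, endpoint path, and payload shape are assumptions based on Ollama's documented local API at the time, not the project's code.

```ts
// Request one embedding per prompt from a local Ollama server.
const OLLAMA_BASE = "http://127.0.0.1:11434"; // default local Ollama address (assumption)

async function ollamaEmbed(texts: string[], model = "nomic-embed-text"): Promise<number[][]> {
  const vectors: number[][] = [];
  for (const prompt of texts) {
    const res = await fetch(`${OLLAMA_BASE}/api/embeddings`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt }),
    });
    if (!res.ok) throw new Error(`Ollama embedding failed: ${res.status}`);
    const { embedding } = (await res.json()) as { embedding: number[] };
    vectors.push(embedding);
  }
  return vectors;
}
```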
Sean Hatfield
633f425206
[FEAT] OpenRouter integration (#784)
* WIP openrouter integration

* add OpenRouter options to onboarding flow and data handling

* add todo to fix headers for rankings

* OpenRouter LLM support complete

* Fix hanging response stream with OpenRouter
update tagline
update comment

* update timeout comment

* wait for first chunk to start timer

* sort OpenRouter models by organization

* uppercase first letter of organization

* sort grouped models by org

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support (#778)
* add LLM support for perplexity

* update README & example env

* fix ENV keys in example env files

* slight changes for QA of perplexity support

* Update Perplexity AI name

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
Timothy Carambat
791c0ee9dc
Enable ability to do full-text query on documents (#758)
* Enable ability to do full-text query on documents
Show alert modal on first pin for client
Add ability to use pins in stream/chat/embed

* typo and copy update

* simplify spread of context and sources
2024-02-21 13:15:45 -08:00
Timothy Carambat
32233974c2
Enable Alpaca JSON export format (#732)
* Enable Alpaca JSON export format

* Replace DOM download link with file save for browser compat
Fix layout of exported JSON types for readability

2024-02-16 12:35:53 -08:00
Timothy Carambat
c59ab9da0a
Refactor LLM chat backend (#717)
* refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon

* no thread in sync chat since only api uses it
adjust import locations
2024-02-14 12:32:07 -08:00
Sean Hatfield
f4b09a8c79
[FEAT] RLHF on response messages (#708)
* WIP RLHF works on historical messages

* refactor Actions component

* completed RLHF up and down votes for chats

* add defaults for HistoricalMessage params

* refactor RLHF implementation
remove forwardRef on history items to prevent rerenders

* remove dup id

* Add rating to CSV output

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-13 11:33:05 -08:00
Sean Hatfield
1b29882c71
[FEAT] Improved CSV chat exports (#700)
* add more fields to csv export to make more useful

* refactor from review comments

* fix escapeCsv function

* catch export errors properly

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-13 10:12:59 -08:00
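The `escapeCsv` fix noted in this commit is the classic CSV-quoting problem. Below is a generic sketch of that kind of escaping per RFC 4180, not the project's exact function: quote any field containing a delimiter, quote, or newline, and double embedded quotes.

```ts
// Escape a single CSV field; wrap and double quotes only when the content requires it.
function escapeCsv(value: unknown): string {
  const text = String(value ?? "");
  return /[",\n\r]/.test(text) ? `"${text.replace(/"/g, '""')}"` : text;
}

function toCsvRow(fields: unknown[]): string {
  return fields.map(escapeCsv).join(",");
}

// Example: a prompt containing quotes and a newline stays in a single CSV cell.
console.log(toCsvRow(["workspace-1", 'He said "hi",\nthen left', 42]));
```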
Sean Hatfield
d789920a19
[FEAT] Automated audit logging (#667)
* WIP event logging - new table for events and new settings view for viewing

* WIP add logging

* UI for log rows

* rename files to Logging to prevent them from being caught by gitignore

* add metadata for all logging events and colored badges in logs page

* remove unneeded comment

* cleanup namespace for logging

* clean up backend calls

* update logging to show to => from settings changes

* add logging for invitations, created, deleted, and accepted

* add logging for user created, updated, suspended, or removed

* add logging for workspace deleted

* add logging for chat logs exported

* add logging for API keys, LLM, embedder, vector db, embed chat, and reset button

* modify event logs

* update to event log types

* simplify rendering of event badges

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-06 15:21:40 -08:00
timothycarambat
5d64f26066 patch admin pwd update 2024-02-06 14:39:56 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints (#680) 2024-02-06 09:17:51 -08:00
Hakeem Abbas
5614e2ed30
feature: Integrate Astra as vectorDBProvider (#648)
* feature: Integrate Astra as vectorDBProvider

feature: Integrate Astra as vectorDBProvider

* Update .env.example

* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation

* reset workspaces

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-26 13:07:53 -08:00
Sean Hatfield
2f3db0e63a
[FEAT] support pinecone serverless (#639)
* migrate pinecone package to latest version and migrate pinecone vectordb provider class

* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete

* migrate query for pinecone db

* typo in log

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-22 16:41:20 -08:00
Timothy Carambat
9a237db3d1
Implement total permission overhaul (#629)
* Implement total permission overhaul
Add explicit permissions on each flex and strict route
Patch issues with role escalation and CRUD of users
Patch permissions on all routes for coverage
Improve middleware to accept role array for clarity

* update comments

* remove API-key permissions for the manager role. A manager could otherwise generate an API key and use that high-privilege key to give themselves admin

* update sidebar permissions for multi-user and single user

* update options for mobile sidebar
2024-01-22 14:14:01 -08:00
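The "role array" middleware mentioned in this permission overhaul can be illustrated with an Express-style guard that accepts the list of roles allowed on a route. This is a hedged sketch; the role names, request shape, and status code are assumptions, not the project's actual middleware.

```ts
// Express-style middleware factory: allow the request only if the user's role is whitelisted.
import type { Request, Response, NextFunction } from "express";

type Role = "admin" | "manager" | "default";

function roleValid(allowedRoles: Role[]) {
  return (req: Request & { user?: { role?: Role } }, res: Response, next: NextFunction) => {
    const role = req.user?.role;
    if (role && allowedRoles.includes(role)) return next();
    return res.status(401).json({ error: "Unauthorized" }); // explicit deny on every protected route
  };
}

// Usage: app.get("/admin/users", roleValid(["admin", "manager"]), handler);
```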
Timothy Carambat
44eb1e9ab0
617 persist special env keys (#624)
* add support for exporting to json and csv in workspace chats

* safety encode URL options

* remove message about openai fine tuning on export success

* all defaults to jsonl

* Persist special env keys on updates

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-01-18 18:13:24 -08:00
Timothy Carambat
0df86699e7
feat: Add support for Zilliz Cloud by Milvus (#615)
* feat: Add support for Zilliz Cloud by Milvus

* update placeholder text
update data handling stmt

* update zilliz descriptor
2024-01-17 18:00:54 -08:00
Sean Hatfield
3fe7a25759
add token context limit for native llm settings (#614)
Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 16:25:30 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
timothycarambat
3e088f22b1 fix: Patch tiktoken method missing
resolves #541
2024-01-05 09:39:19 -08:00
Timothy Carambat
92da23e963
Handle special token in TikToken (#528)
* Handle special token in TikToken
resolves #525

* remove duplicate method
add clarification comment on implementation
2024-01-04 15:47:00 -08:00
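The special-token issue referenced above comes from tiktoken encoders refusing input that contains reserved markers such as `<|endoftext|>` unless they are explicitly allowed. One generic way to avoid the crash is sketched below; it is not necessarily the project's fix, and the helper and pattern are assumptions.

```ts
// Neutralize reserved "<|...|>" markers before token counting so the encoder never throws.
const SPECIAL_TOKEN_PATTERN = /<\|[a-zA-Z0-9_]+\|>/g;

function countTokensSafely(encode: (text: string) => number[], text: string): number {
  // Stripping the "<|" and "|>" wrapper keeps the words but removes the reserved form.
  const sanitized = text.replace(SPECIAL_TOKEN_PATTERN, (m) => m.slice(2, -2));
  return encode(sanitized).length;
}
```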
Timothy Carambat
ceadc8d467
patch gpt-4-turbo token allowance for Azure model (#514) 2024-01-02 12:49:48 -08:00
Timothy Carambat
d7481671ba
Prevent external service localhost question (#497)
* Prevent external service localhost question

* add 0.0.0.0 to docker-invalid URL

* clarify hint
2023-12-28 10:47:02 -08:00
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
timothycarambat
7bee849c65 chore: Force VectorCache to always be on;
update file picker spacing for attributes
2023-12-20 10:45:03 -08:00
Timothy Carambat
65c7c0a518
fix: patch api key not persisting when setting LLM/Embedder (#458) 2023-12-16 10:21:36 -08:00
Timothy Carambat
cba66150d7
patch: API key to localai service calls (#421)
connect #417
2023-12-11 14:18:28 -08:00
Timothy Carambat
8cc1455b72
feat: add support for variable chunk length (#415)
fix: clean up code to clarify embedding length handling
resolves #388
2023-12-07 16:27:36 -08:00
Timothy Carambat
655ebd9479
[Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* add built-in LLM support (experimental)

* Update to progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00
Timothy Carambat
88cdd8c872
Add built-in embedding engine into AnythingLLM (#411)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* Update to progress output for embedder

* move embedder selection options to component

* forgot import

* add Data privacy alert updates for local embedder
2023-12-06 10:36:22 -08:00