* implement custom icon on login screen for single & multi user + custom app name feature
* hide field when not relevant
* set customApp name
* show original anythingllm login logo unless custom logo is set
* nit-picks
* remove console log
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
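A minimal sketch of the login-screen behavior in the commits above, assuming a hypothetical `LoginLogo` component; the prop names and default asset path are illustrative, not the actual AnythingLLM frontend code.

```jsx
// Illustrative only: component, props, and asset path are assumptions, not the real frontend code.
import React from "react";
import DefaultLoginLogo from "./assets/anything-llm-login-logo.png"; // assumed bundled asset

export default function LoginLogo({ customLogoUrl = null, customAppName = null }) {
  // Show the original AnythingLLM login logo unless a custom logo has been set.
  const logoSrc = customLogoUrl || DefaultLoginLogo;
  return (
    <div className="login-header">
      <img src={logoSrc} alt="Login logo" />
      {/* Hide the app name field entirely when no custom name is configured. */}
      {!!customAppName && <p>{customAppName}</p>}
    </div>
  );
}
```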
* add support for voyageai embedder
* remove unneeded import
* linting
* Add ENV examples
Update how chunks are processed for Voyage
use correct langchain import
Add data handling
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
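A rough sketch of how the Voyage embedder could batch chunks via langchain, as the commits above describe; the `@langchain/community` import path, option names, batch size, and ENV key are assumptions that may differ from the real implementation.

```js
// Sketch only: import path, option names, batch size, and ENV key are assumptions.
const { VoyageEmbeddings } = require("@langchain/community/embeddings/voyage");

const MAX_CHUNKS_PER_REQUEST = 128; // assumed per-request limit for Voyage

async function embedChunks(textChunks = []) {
  const voyage = new VoyageEmbeddings({
    apiKey: process.env.VOYAGEAI_API_KEY, // assumed ENV key name
    modelName: "voyage-2", // illustrative model id
  });

  // Embed chunks in fixed-size batches instead of one oversized request.
  const vectors = [];
  for (let i = 0; i < textChunks.length; i += MAX_CHUNKS_PER_REQUEST) {
    const batch = textChunks.slice(i, i + MAX_CHUNKS_PER_REQUEST);
    vectors.push(...(await voyage.embedDocuments(batch)));
  }
  return vectors;
}
```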
* Support SQL Agent skill
* add MSSQL agent connector
* Add frontend to agent skills
remove FAKE_DB mock
reset skills to pickup child-skill dynamically
* add prompt examples for tools on untooled
* add better logging on SQL agents
* Wipe toolruns on each chat relay so tools can be used within the same session
* update comments
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name
patch workspace<>model support for provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
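Below is a hedged sketch of what an MSSQL agent connector can look like using the `mssql` npm package; the class shape and method names are illustrative, only the `mssql` calls themselves are real APIs.

```js
// Sketch of an MSSQL connector for the SQL agent skill. Class shape is illustrative;
// only the `mssql` package calls (connect/query/close) are real APIs.
const mssql = require("mssql");

class MSSQLConnector {
  constructor({ connectionString }) {
    this.connectionString = connectionString;
    this.pool = null;
  }

  async connect() {
    if (!this.pool) this.pool = await mssql.connect(this.connectionString);
    return this.pool;
  }

  // Run a read-only query and return rows the agent can reason over.
  async runQuery(query = "") {
    const pool = await this.connect();
    const result = await pool.request().query(query);
    return result.recordset;
  }

  async close() {
    if (this.pool) await this.pool.close();
    this.pool = null;
  }
}
```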
* getChatCompletion working WIP streaming
* WIP
* working streaming WIP abort stream
* implement cohere embedder support
* remove inputType option from cohere embedder
* fix cohere LLM from not aborting stream when canceled by user
* Patch Cohere implementation
* add cohere to onboarding
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
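A sketch of the Cohere streaming-with-abort behavior described above, assuming the cohere-ai v7 `CohereClient`/`chatStream` interface; event names, option keys, and the ENV key are assumptions.

```js
// Sketch only: assumes the cohere-ai v7 CohereClient/chatStream interface; event names,
// option keys, and the ENV key may differ from the real implementation.
const { CohereClient } = require("cohere-ai");

async function streamChat({ message, signal }) {
  const cohere = new CohereClient({ token: process.env.COHERE_API_KEY }); // assumed ENV key
  const stream = await cohere.chatStream({ model: "command-r", message });

  let fullText = "";
  for await (const event of stream) {
    // Stop reading the stream as soon as the user cancels the request.
    if (signal?.aborted) break;
    if (event.eventType === "text-generation") fullText += event.text;
  }
  return fullText;
}
```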
* Enable per-workspace provider/model combination
* cleanup
* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model
* add space
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
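A tiny illustration of per-workspace provider/model resolution with a system-wide fallback; the field and ENV names are assumptions, not the actual schema.

```js
// Illustrative only: field names and ENV keys are assumptions, not the actual schema.
function getLLMSettingsForWorkspace(workspace = {}) {
  return {
    // Prefer the workspace-level override, otherwise fall back to the system-wide setting.
    provider: workspace.chatProvider || process.env.LLM_PROVIDER,
    model: workspace.chatModel || process.env.LLM_MODEL_PREF,
  };
}
```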
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter
update tagline
update comment
* update timeout comment
* wait for first chunk to start timer
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
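Since OpenRouter exposes an OpenAI-compatible API, the integration can reuse the `openai` npm client with a custom baseURL plus the attribution headers mentioned above; the watchdog below mirrors the "wait for first chunk to start timer" fix, but the timeout window and overall shape are illustrative.

```js
// Sketch only: reuses the `openai` npm client against OpenRouter's OpenAI-compatible API.
// Header values, model id, ENV key, and the watchdog window are illustrative assumptions.
const OpenAI = require("openai");

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    "HTTP-Referer": "https://anythingllm.com", // attribution headers OpenRouter uses for rankings
    "X-Title": "AnythingLLM",
  },
});

async function streamCompletion(messages, onToken) {
  const stream = await client.chat.completions.create({
    model: "openrouter/auto",
    messages,
    stream: true,
  });

  let watchdog = null;
  const resetWatchdog = () => {
    if (watchdog) clearTimeout(watchdog);
    // If no further chunk arrives within the window, assume the stream hung and abort it.
    watchdog = setTimeout(() => stream.controller.abort(), 8_000);
  };

  try {
    for await (const chunk of stream) {
      resetWatchdog(); // the timer only starts once the first chunk has arrived
      const token = chunk.choices?.[0]?.delta?.content || "";
      if (token) onToken(token);
    }
  } catch (_error) {
    // Aborting a hung stream surfaces here; treat it as end-of-stream.
  } finally {
    if (watchdog) clearTimeout(watchdog);
  }
}
```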
* add LLM support for perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
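Perplexity's chat endpoint is likewise OpenAI-compatible, so a provider sketch can reuse the same client with a different baseURL; the model id and ENV key below are assumptions.

```js
// Sketch only: Perplexity's chat API is OpenAI-compatible; model id and ENV key are assumptions.
const OpenAI = require("openai");

const perplexity = new OpenAI({
  baseURL: "https://api.perplexity.ai",
  apiKey: process.env.PERPLEXITY_API_KEY,
});

async function chat(messages) {
  const response = await perplexity.chat.completions.create({
    model: "sonar-small-online", // illustrative model id
    messages,
  });
  return response.choices[0].message.content;
}
```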
* WIP custom footer icons
* UI for updating footer icons complete and backend to save/modify
* add backend for unprotected footer fetch
* break out footer into separate component and render footer items using a cache for 1 hour
* wip review
* refactor & cleanup
* Optimize footer form component
Optimize caching for footer icons
Add validation on SystemSetting upserts
Normalize fallback items for footer_data
* Adjust max icons to 3
* fix success message on remove
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
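An illustrative version of the footer-icon caching and validation described above (1-hour cache, max 3 icons); the function names and `footer_data` access are stand-ins, not the real modules.

```js
// Illustrative only: function names and settings access are stand-ins for the real modules.
const MAX_FOOTER_ICONS = 3;
const CACHE_TTL_MS = 60 * 60 * 1000; // cache footer items for 1 hour

let footerCache = { items: null, expiresAt: 0 };

function validateFooterData(items) {
  if (!Array.isArray(items)) return [];
  // Cap at 3 icons and drop entries missing a URL or icon.
  return items.slice(0, MAX_FOOTER_ICONS).filter((item) => !!item?.url && !!item?.icon);
}

async function getFooterItems(fetchFooterData) {
  const now = Date.now();
  if (footerCache.items && footerCache.expiresAt > now) return footerCache.items;

  const stored = await fetchFooterData(); // e.g. a SystemSettings lookup for `footer_data`
  const items = validateFooterData(stored);
  footerCache = { items, expiresAt: now + CACHE_TTL_MS };
  return items;
}
```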
* feature: Integrate Astra as vectorDBProvider
* Update .env.example
* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation
* reset workspaces
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
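A sketch of the Astra chunk-insertion behavior described above: dimensionality inferred from chunk 0 and inserts split to respect a per-request limit; the collection/client method names and the batch limit are assumptions rather than the exact SDK usage.

```js
// Sketch only: the collection/client method names and batch limit are assumptions.
const MAX_INSERT_BATCH = 20; // assumed per-request insertion limit for Astra

async function upsertChunks(db, collectionName, chunks = []) {
  if (chunks.length === 0) return;

  // Dimensionality is taken from chunk 0 so the collection matches the embedder in use.
  const dimension = chunks[0].values.length;
  const collection = await db.createCollection(collectionName, { vector: { dimension } });

  // Insert in small batches to stay under the max chunk insertion limit.
  for (let i = 0; i < chunks.length; i += MAX_INSERT_BATCH) {
    const batch = chunks.slice(i, i + MAX_INSERT_BATCH).map((chunk) => ({
      _id: chunk.id,
      $vector: chunk.values,
      metadata: chunk.metadata,
    }));
    await collection.insertMany(batch);
  }
}
```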
* migrate pinecone package to latest version and migrate pinecone vectordb provider class
* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete
* migrate query for pinecone db
* typo in log
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
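With the latest `@pinecone-database/pinecone` package, the client needs only an API key (no environment name) for serverless indexes; the query sketch below is illustrative of that migration, with index and namespace names assumed.

```js
// Sketch only: with the current client an API key alone is enough (no environment name);
// the index/namespace names and ENV key here are illustrative.
const { Pinecone } = require("@pinecone-database/pinecone");

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });

async function queryVectors(indexName, namespace, queryVector, topN = 4) {
  const index = pinecone.index(indexName).namespace(namespace);
  const response = await index.query({
    vector: queryVector,
    topK: topN,
    includeMetadata: true,
  });
  return response.matches || [];
}
```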
* add support for mistral api
* update docs to show support for Mistral
* add default temperature to all providers; suggest different defaults per provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
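Mistral's chat API is OpenAI-compatible, so a provider sketch can reuse the `openai` client with a Mistral baseURL; the default temperature, model id, and ENV key below are illustrative of the per-provider defaults mentioned above, not the exact values.

```js
// Sketch only: Mistral's chat endpoint is OpenAI-compatible; the default temperature,
// model id, and ENV key are illustrative, not the exact values used.
const OpenAI = require("openai");

const DEFAULT_TEMPERATURE = 0.0; // each provider can suggest its own default

const mistral = new OpenAI({
  baseURL: "https://api.mistral.ai/v1",
  apiKey: process.env.MISTRAL_API_KEY,
});

async function chat(messages, temperature = DEFAULT_TEMPERATURE) {
  const response = await mistral.chat.completions.create({
    model: "mistral-small-latest", // illustrative model id
    messages,
    temperature,
  });
  return response.choices[0].message.content;
}
```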
* issue #543 support milvus vector db
* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added
* update comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
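A hedged sketch of using `MilvusClient` directly (no ORM) with the vector dimension taken from the embedder, as the commits above describe; the field schema, ENV key, and embedder method are assumptions.

```js
// Sketch only: field schema, ENV key, and the embedder method are assumptions.
const { MilvusClient, DataType } = require("@zilliz/milvus2-sdk-node");

async function ensureCollection(embedder, collectionName) {
  const client = new MilvusClient({ address: process.env.MILVUS_ADDRESS }); // assumed ENV key

  const { value: exists } = await client.hasCollection({ collection_name: collectionName });
  if (exists) return client;

  // Embed a throwaway string to discover the embedder model's dimensionality.
  const probe = await embedder.embedTextInput("dimension probe"); // assumed to return one vector
  await client.createCollection({
    collection_name: collectionName,
    fields: [
      { name: "id", data_type: DataType.VarChar, is_primary_key: true, max_length: 36 },
      { name: "vector", data_type: DataType.FloatVector, dim: probe.length },
    ],
  });
  return client;
}
```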
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
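An illustrative helper for the model-grouping commits above: Together AI model ids follow an `organization/model-name` pattern, so they can be grouped by the organization prefix with its first letter uppercased; the input shape is assumed.

```js
// Illustrative only: assumes model ids of the form "organization/model-name".
function groupModelsByOrganization(models = []) {
  const grouped = {};
  for (const model of models) {
    const [org = "Other"] = model.id.split("/");
    const label = org.charAt(0).toUpperCase() + org.slice(1); // uppercase first letter for display
    if (!grouped[label]) grouped[label] = [];
    grouped[label].push(model);
  }

  // Sort organizations alphabetically and each organization's models by id.
  return Object.keys(grouped)
    .sort()
    .map((label) => ({
      organization: label,
      models: grouped[label].sort((a, b) => a.id.localeCompare(b.id)),
    }));
}
```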
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* forgot import
* add Data privacy alert updates for local embedder
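A sketch of keeping the native embedder loaded in memory across chats, wrapping the model load in try/catch and logging download progress as the commits above describe; the loader module and its API are hypothetical.

```js
// Sketch only: the loader module and its API are hypothetical stand-ins for the native embedder.
let embedderInstance = null;

async function getNativeEmbedder() {
  if (embedderInstance) return embedderInstance; // persist the model in memory for N+1 chats

  try {
    // Hypothetical loader for the bundled all-MiniLM-L6-v2 model; logs download progress.
    const { loadEmbeddingModel } = require("./nativeEmbedder");
    embedderInstance = await loadEmbeddingModel("Xenova/all-MiniLM-L6-v2", {
      onProgress: (pct) => console.log(`Downloading embedder model: ${pct}%`),
    });
  } catch (error) {
    console.error("Failed to load native embedder", error.message);
    embedderInstance = null;
  }
  return embedderInstance;
}
```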