* Update OpenAI TTS config to allow a custom BaseURL
* uncheck config file
* break openai generic TTS into its own provider
* add space
* hide TTS on user messages
---------
Co-authored-by: Adam <phazei@gmail.com>
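A rough sketch of how the configurable TTS endpoint might be wired up with the openai Node SDK; the class name and env var names below are illustrative assumptions, not the repo's actual identifiers:

```ts
// Illustrative sketch only: an OpenAI-compatible TTS provider that honors a custom base URL.
// Class name and env vars are assumptions, not the project's actual identifiers.
import OpenAI from "openai";

class GenericOpenAiTTS {
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.TTS_OPENAI_COMPATIBLE_KEY ?? "not-needed",
      // Point at any OpenAI-compatible speech endpoint instead of api.openai.com.
      baseURL: process.env.TTS_OPENAI_COMPATIBLE_ENDPOINT,
    });
  }

  // Convert an assistant message to speech and return the raw audio bytes.
  async ttsBuffer(textInput: string): Promise<Buffer> {
    const response = await this.client.audio.speech.create({
      model: process.env.TTS_OPENAI_COMPATIBLE_MODEL ?? "tts-1",
      voice: "alloy", // any voice supported by the target server
      input: textInput,
    });
    return Buffer.from(await response.arrayBuffer());
  }
}
```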
* set message limit per user
* remove old limit user messages + unused admin page
* fix daily message validation
* refactor message limit input
refactor canSendChat into a method on the User model
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
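A minimal sketch of what the per-user daily limit check could look like once moved onto the user model; the field and helper names here are assumptions:

```ts
// Illustrative sketch: the daily message check as a method-style helper on the user model.
// dailyMessageLimit and the WorkspaceChats counter are assumed names.
declare const WorkspaceChats: {
  count(where: Record<string, unknown>): Promise<number>;
};

interface User {
  id: number;
  role: "admin" | "manager" | "default";
  dailyMessageLimit: number | null; // null = unlimited
}

async function canSendChat(user: User): Promise<boolean> {
  // Admins and users with no limit configured are never throttled.
  if (user.role === "admin" || user.dailyMessageLimit === null) return true;

  // Count chats this user sent in the trailing 24-hour window.
  const currentChatCount = await WorkspaceChats.count({
    user_id: user.id,
    createdAt: { gte: new Date(Date.now() - 24 * 60 * 60 * 1000) },
  });
  return currentChatCount < user.dailyMessageLimit;
}
```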
* Issue #1943: Add support for LLM provider - Fireworks AI
* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI
* class only return
---------
Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
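Fireworks AI exposes an OpenAI-compatible chat completions endpoint, so the provider can likely reuse the OpenAI client with a swapped base URL. A sketch under that assumption; the env names and default model slug are illustrative:

```ts
// Illustrative sketch of a Fireworks AI chat provider via its OpenAI-compatible endpoint.
// Env var names and the default model slug are assumptions.
import OpenAI from "openai";

const fireworks = new OpenAI({
  apiKey: process.env.FIREWORKS_AI_LLM_API_KEY,
  baseURL: "https://api.fireworks.ai/inference/v1",
});

async function getChatCompletion(prompt: string): Promise<string | null> {
  const result = await fireworks.chat.completions.create({
    model:
      process.env.FIREWORKS_AI_LLM_MODEL_PREF ??
      "accounts/fireworks/models/llama-v3p1-70b-instruct",
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7,
  });
  return result.choices[0]?.message?.content ?? null;
}
```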
* wip bg workers for live document sync
* Add ability to re-embed specific documents across many workspaces via background queue
bg workers are gated behind an experimental system setting flag that needs to be explicitly enabled
UI for watching/unwatching documents that are embedded.
TODO: UI to easily manage all bg tasks and see run results
TODO: UI to enable this feature and background endpoints to manage it
* create frontend views and paths
Move elements to correct experimental scope
* update migration to delete runs on removal of watched document
* Add watch support to YouTube transcripts (#1716)
* Add watch support to YouTube transcripts
refactor how sync is done for supported types
* Watch specific files in Confluence space (#1718)
Add failure-prune check for runs
* create tmp workflow modifications for beta image
* create tmp workflow modifications for beta image
* create tmp workflow modifications for beta image
* dual build
update copy of alert modals
* update job interval
* Add support for live-sync of GitHub files
* update copy for document sync feature
* hide Experimental features from UI
* update docs links
* [FEAT] Implement new settings menu for experimental features (#1735)
* implement new settings menu for experimental features
* remove unused context save bar
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* don't run job on boot
* unset workflow changes
* Add persistent encryption service
Relay key to collector so persistent encryption can be used
Encrypt any private data in chunkSources used for replay during resync jobs
* update JSDoc
* Linting and organization
* update modal copy for feature
---------
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
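A rough sketch of the kind of persistent encrypt/decrypt helper described above, using Node's built-in crypto; the key handling and payload format here are assumptions:

```ts
// Illustrative sketch of a persistent encryption helper for private chunkSource data.
// How the key is stored/relayed and the payload format are assumptions.
import crypto from "crypto";

class EncryptionManager {
  private key: Buffer; // 32-byte key persisted in system settings and shared with the collector

  constructor(secretHex: string) {
    this.key = Buffer.from(secretHex, "hex");
  }

  encrypt(plainText: string): string {
    const iv = crypto.randomBytes(12);
    const cipher = crypto.createCipheriv("aes-256-gcm", this.key, iv);
    const encrypted = Buffer.concat([cipher.update(plainText, "utf8"), cipher.final()]);
    const tag = cipher.getAuthTag();
    // iv:tag:ciphertext, all hex-encoded, so the value can be stored as a plain string.
    return [iv, tag, encrypted].map((b) => b.toString("hex")).join(":");
  }

  decrypt(payload: string): string {
    const [ivHex, tagHex, dataHex] = payload.split(":");
    const decipher = crypto.createDecipheriv("aes-256-gcm", this.key, Buffer.from(ivHex, "hex"));
    decipher.setAuthTag(Buffer.from(tagHex, "hex"));
    return Buffer.concat([
      decipher.update(Buffer.from(dataHex, "hex")),
      decipher.final(),
    ]).toString("utf8");
  }
}
```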
* implement custom icon on login screen for single & multi user + custom app name feature
* hide field when not relevant
* set customApp name
* show original AnythingLLM login logo unless custom logo is set
* nit-picks
* remove console log
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* add support for voyageai embedder
* remove unneeded import
* linting
* Add ENV examples
Update how chunks are processed for Voyage
use correct langchain import
Add data handling
---------
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
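A sketch of how the chunk batching for Voyage might look with LangChain's community embeddings; the import path, option names, and batch size are assumptions:

```ts
// Illustrative sketch: embed text chunks with VoyageAI in batches.
// The import path, option names, and batch size are assumptions.
import { VoyageEmbeddings } from "@langchain/community/embeddings/voyage";

async function embedChunks(textChunks: string[]): Promise<number[][]> {
  const voyage = new VoyageEmbeddings({
    apiKey: process.env.VOYAGEAI_API_KEY,
    modelName: process.env.EMBEDDING_MODEL_PREF ?? "voyage-large-2-instruct",
  });

  // Voyage limits how many inputs a single request may carry, so embed in batches.
  const batchSize = 128;
  const results: number[][] = [];
  for (let i = 0; i < textChunks.length; i += batchSize) {
    results.push(...(await voyage.embedDocuments(textChunks.slice(i, i + batchSize))));
  }
  return results;
}
```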
* Support SQL Agent skill
* add MSSQL agent connector
* Add frontend to agent skills
remove FAKE_DB mock
reset skills to pick up child-skills dynamically
* add prompt examples for tools on untooled
* add better logging on SQL agents
* Wipe toolruns on each chat relay so tools can be used within the same session
* update comments
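A minimal sketch of what the MSSQL side of the SQL agent connector could look like with the mssql package; the connection handling and method names are assumptions:

```ts
// Illustrative sketch of an MSSQL connector for the SQL agent skill.
// Connection handling and method names are assumptions.
import mssql from "mssql";

class MSSQLConnector {
  private pool: mssql.ConnectionPool | null = null;

  constructor(private connectionString: string) {}

  async connect(): Promise<void> {
    this.pool = await mssql.connect(this.connectionString);
  }

  // Run a query on behalf of the agent and return rows as plain objects.
  async runQuery(query: string): Promise<Record<string, unknown>[]> {
    if (!this.pool) await this.connect();
    const result = await this.pool!.request().query(query);
    return result.recordset;
  }

  async close(): Promise<void> {
    await this.pool?.close();
    this.pool = null;
  }
}
```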
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name
patch workspace<>model support for provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
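Text Generation Web UI serves an OpenAI-compatible API, so a streamed chat likely looks something like this; the base path, env name, and model handling are assumptions:

```ts
// Illustrative sketch: streaming against Text Generation Web UI's OpenAI-compatible API.
// The base path and env var name are assumptions; the local server needs no real API key.
import OpenAI from "openai";

const textgen = new OpenAI({
  apiKey: "not-needed",
  baseURL: process.env.TEXT_GEN_WEB_UI_BASE_PATH ?? "http://127.0.0.1:5000/v1",
});

async function streamReply(prompt: string): Promise<string> {
  const stream = await textgen.chat.completions.create({
    model: "loaded-model", // the server answers with whichever model is currently loaded
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  let reply = "";
  for await (const chunk of stream) {
    reply += chunk.choices[0]?.delta?.content ?? "";
  }
  return reply;
}
```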
* getChatCompletion working WIP streaming
* WIP
* working streaming WIP abort stream
* implement cohere embedder support
* remove inputType option from cohere embedder
* fix cohere LLM from not aborting stream when canceled by user
* Patch Cohere implementation
* add cohere to onboarding
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
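A sketch of the abort-handling fix in spirit: stop consuming the Cohere stream as soon as the client closes the connection, instead of draining it to the end. The stream and response shapes below are assumptions, not the actual implementation:

```ts
// Illustrative sketch: break out of the token stream when the user cancels the request.
// The event and response shapes are assumptions.
async function handleAbortableStream(
  stream: AsyncIterable<{ eventType: string; text?: string }>,
  response: { write(chunk: string): void; on(ev: "close", cb: () => void): void },
): Promise<string> {
  let fullText = "";
  let clientClosed = false;
  response.on("close", () => (clientClosed = true));

  for await (const event of stream) {
    if (clientClosed) break; // user canceled: stop reading instead of draining the stream
    if (event.eventType === "text-generation" && event.text) {
      fullText += event.text;
      response.write(event.text);
    }
  }
  return fullText;
}
```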
* Enable per-workspace provider/model combination
* cleanup
* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model
* add space
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
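A small sketch of how per-workspace provider/model resolution might fall back to the system defaults; the field and env names are assumptions:

```ts
// Illustrative sketch: resolve the chat provider/model per workspace, falling back
// to system-wide defaults. Field and env var names are assumptions.
interface Workspace {
  chatProvider?: string | null;
  chatModel?: string | null;
}

function resolveChatConfig(workspace: Workspace): { provider: string; model: string } {
  // Prefer the workspace-level override; never wipe it back to the system default.
  const provider = workspace.chatProvider ?? process.env.LLM_PROVIDER ?? "openai";
  const model = workspace.chatModel ?? process.env.OPEN_MODEL_PREF ?? "gpt-4o";
  return { provider, model };
}
```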
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter
update tagline
update comment
* update timeout comment
* wait for first chunk to start timer
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
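A sketch of the model grouping described above: OpenRouter model ids look like "organization/model-slug", so the organization prefix can drive the grouping and sorting in the picker. Display details below are assumptions:

```ts
// Illustrative sketch: group OpenRouter models by organization for the model picker.
// The display label and sort rules are assumptions.
interface OpenRouterModel {
  id: string; // e.g. "anthropic/claude-3.5-sonnet"
  name: string;
}

function groupByOrganization(models: OpenRouterModel[]): Record<string, OpenRouterModel[]> {
  const grouped: Record<string, OpenRouterModel[]> = {};
  for (const model of models) {
    const org = model.id.split("/")[0];
    // Uppercase the first letter so groups render as "Openai", "Anthropic", etc.
    const label = org.charAt(0).toUpperCase() + org.slice(1);
    (grouped[label] ??= []).push(model);
  }
  // Sort models alphabetically within each organization group.
  for (const label of Object.keys(grouped)) {
    grouped[label].sort((a, b) => a.name.localeCompare(b.name));
  }
  return grouped;
}
```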
* add LLM support for Perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
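For reference, a small sketch of the ENV-driven settings the example env files point at; the variable names and default model slug here are assumptions:

```ts
// Illustrative sketch: Perplexity provider settings read from ENV.
// Variable names and the default model slug are assumptions.
const perplexitySettings = {
  apiKey: process.env.PERPLEXITY_API_KEY,
  model: process.env.PERPLEXITY_MODEL_PREF ?? "llama-3-sonar-large-32k-online",
  basePath: "https://api.perplexity.ai", // OpenAI-compatible chat completions endpoint
};
```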