* feat: add new model provider: Novita AI
* feat: finished Novita AI
* fix: code lint
* remove unneeded logging
* add back log for Novita stream not self-closing
* Clarify ENV vars for LLM/embedder separation for future use (see the sketch after this group)
Patch ENV check for workspace/agent provider
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
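The ENV-separation commits above only name the change; as a rough illustration, the sketch below resolves the chat LLM and the embedder from independent environment variables so the two providers can differ. The variable names and helper functions are assumptions for illustration, not confirmed project constants.

```ts
// Minimal sketch: pick the chat LLM and the embedder from separate env vars so
// one provider (e.g. Novita AI) can serve chat while another handles embeddings.
// LLM_PROVIDER / EMBEDDING_ENGINE are assumed names, not verified constants.
type ProviderName = "novita" | "openai" | "native";

function selectedLLMProvider(): ProviderName {
  return (process.env.LLM_PROVIDER ?? "openai") as ProviderName;
}

function selectedEmbeddingEngine(): ProviderName {
  // Intentionally independent of LLM_PROVIDER so the two selections can diverge.
  return (process.env.EMBEDDING_ENGINE ?? "native") as ProviderName;
}

console.log({ llm: selectedLLMProvider(), embedder: selectedEmbeddingEngine() });
```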
* Issue #1943: Add support for LLM provider - Fireworks AI
* Update UI selection boxes
Update base AI keys for future embedder support if needed
Add agent capabilities for FireworksAI
* class only return
---------
Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
* initial commit for chrome extension
* wip browser extension backend
* wip frontend browser extension settings
* fix typo for browserExtension route
* implement verification codes + frontend panel for browser extension keys
* reorganize + state management for all connection states
* implement embed to workspace
* add "send page to AnythingLLM" extension option + refactor
* refactor connection string auth + update context menus + organize background.js into models
* popup extension from main app and save if successful
* fix Hebrew translation misspelling
* fetch custom logo inside chrome extension
* delete api keys on disconnect of extension
* use correct apiUrl constant in frontend + remove unneeded comments
* remove upload-link endpoint and send inner text html to raw text collector endpoint
* update readme
* fix readme link
* fix readme typo
* update readme
* handle deletion of browser keys with key id and DELETE endpoint
* move event string to constant
* remove tablename and writable fields from BrowserExtensionApiKey backend model
* add border-none to all buttons and inputs for desktop compatibility
* patch Prisma injections (see the sketch after this group)
* update delete endpoints to delete keys by id
* remove unused prop
* add button to attempt browser extension connection + remove max active keys
* wip multi user mode support
* multi user mode support
* clean up backend + show created by in frontend browser extension page
* show multi user warning message on key creation + hide context menus when no workspaces
* show browser extension options to managers
* small backend changes and refactors
* extension cleanup
* rename submodule
* extension updates & docs
* dev docker build
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
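The "patch Prisma injections" commit doesn't show the fix itself; the sketch below illustrates the general technique of parameterizing raw queries with Prisma instead of interpolating user input into SQL strings. The table and column names are placeholders, not the project's actual schema.

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Unsafe pattern: interpolating user input into the SQL string allows injection.
//   prisma.$queryRawUnsafe(
//     `SELECT * FROM browser_extension_api_keys WHERE key = '${userSuppliedKey}'`
//   );

// Safer pattern: $queryRaw's tagged template binds values as query parameters.
async function findKeyRecord(userSuppliedKey: string) {
  return prisma.$queryRaw`
    SELECT * FROM browser_extension_api_keys WHERE key = ${userSuppliedKey}
  `;
}
```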
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name
patch workspace<>model support for provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
[ 📝 ] Added new LLMs to supported LLMs list on README
- Added KoboldCPP to supported LLMs list
- Added Cohere to supported LLMs list
- Added Generic OpenAI to supported LLMs list
- Added Cohere to supported Embedding models list
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter
update tagline
update comment
* update timeout comment
* wait for first chunk to start timer (idle-timeout sketch after this group)
* sort OpenRouter models by organization (grouping sketch after this group)
* uppercase first letter of organization
* sort grouped models by org
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
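The hanging-stream fix and the "wait for first chunk to start timer" commit describe an idle timeout that is only armed once streaming has actually begun, so a slow time-to-first-token is not mistaken for a hung response. A minimal sketch of that idea, with the wrapper name and timeout value as assumptions:

```ts
// Sketch: stop waiting on a streamed completion that goes silent between chunks,
// but only start the idle timer after the first chunk has arrived.
async function* withIdleTimeout<T>(
  stream: AsyncIterable<T>,
  idleMs = 20_000
): AsyncGenerator<T> {
  const iterator = stream[Symbol.asyncIterator]();
  let firstChunkSeen = false;

  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const next = iterator.next();
    const result = firstChunkSeen
      ? await Promise.race([
          next,
          new Promise<"timeout">((resolve) => {
            timer = setTimeout(() => resolve("timeout"), idleMs);
          }),
        ])
      : await next; // no timeout until the stream has produced something
    if (timer) clearTimeout(timer);

    if (result === "timeout") {
      await iterator.return?.(); // give up on a stream that went silent mid-response
      return;
    }
    if (result.done) return;

    firstChunkSeen = true;
    yield result.value;
  }
}
```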
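The model-grouping commits (sort by organization, uppercase the first letter, sort within groups) amount to a small grouping-and-sorting pass over the model list before it is rendered. A sketch of that shape; the model fields and the organization-prefix convention are assumptions rather than the exact OpenRouter response format:

```ts
// Sketch: group models by organization (the prefix before "/"), capitalize the
// group label, sort models inside each group, then sort the groups themselves.
interface Model {
  id: string; // e.g. "mistralai/mistral-7b-instruct" (id shape is an assumption)
  name: string;
}

function groupModelsByOrganization(models: Model[]): Record<string, Model[]> {
  const grouped: Record<string, Model[]> = {};
  for (const model of models) {
    const org = model.id.split("/")[0] || "unknown";
    const label = org.charAt(0).toUpperCase() + org.slice(1); // "mistralai" -> "Mistralai"
    (grouped[label] ||= []).push(model);
  }

  return Object.fromEntries(
    Object.entries(grouped)
      .map(([org, list]) => [org, [...list].sort((a, b) => a.name.localeCompare(b.name))] as const)
      .sort(([a], [b]) => a.localeCompare(b))
  );
}
```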
* add LLM support for Perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* WIP embedded app
* WIP got response from backend in embedded app
* WIP streaming prints to embedded app
* implemented streaming and minified Tailwind for styling in the embedded app
* WIP embedded app history functional
* load params from script tag into embedded app (see the sketch after this group)
* rough in modularization of embed chat
cleanup dev process for easier dev support
move all chat to components
todo: build process
todo: backend support
* remove eslint config
* Implement models and clean up embed chat endpoints
Improve build process for embed
prod minification and bundle size awareness
WIP
* forgot files
* rename to embed folder
* introduce chat modal styles
* add middleware validations on embed chat
* auto-open param and default greeting
* reset chat history
* Admin embed config page
* Admin Embed Chats mgmt page
* update embed
* nonpriv
* more style support
reopen if chat was last opened
* update comments
* remove unused imports
* allow change of workspace for embedconfig
* update failure-to-lookup message
* update reset script
* update instructions
* Add more styling options
Add sponsor text at bottom
Support dynamic container height
Loading animations
* publish new embed script
* Add back syntax highlighting and keep bundle small via dynamic script build
* add hint
* update readme
* update copy model for snippet with link to styles
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
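The "load params from script tag" commit points at a common embed pattern: the widget reads its configuration from data attributes on the `<script>` tag that loaded it. A sketch of that pattern follows; the attribute names are illustrative assumptions, not the published embed API.

```ts
// Sketch: read embed settings from data-* attributes on the loading script tag,
// e.g. <script src="embed.js" data-embed-id="..." data-base-api-url="..."></script>
interface EmbedSettings {
  embedId: string | null;
  baseApiUrl: string | null;
  greeting: string;
  openOnLoad: boolean;
}

function readEmbedSettings(): EmbedSettings {
  // document.currentScript is the <script> element currently executing
  // (it can be null inside modules or deferred callbacks, so guard for that).
  const script = document.currentScript as HTMLScriptElement | null;
  const data = (script?.dataset ?? {}) as Record<string, string | undefined>;

  return {
    embedId: data.embedId ?? null,       // data-embed-id
    baseApiUrl: data.baseApiUrl ?? null, // data-base-api-url
    greeting: data.greeting ?? "How can I help you?",
    openOnLoad: data.openOnLoad === "on", // data-open-on-load="on"
  };
}
```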
* add support for Mistral API
* update docs to show support for Mistral
* add default temperature to all providers, suggest different results per provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* issue #543: support Milvus vector DB
* migrate Milvus to use MilvusClient instead of ORM (see the sketch after this group)
normalize env setup for docs/implementation
feat: embedder model dimension added
* update comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
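The Milvus commits describe talking to Milvus through `MilvusClient` from `@zilliz/milvus2-sdk-node` directly rather than an ORM, and sizing the vector field from the embedder's model dimension instead of a hard-coded value. The sketch below shows that general shape; the constructor options and field definitions vary across SDK versions, so treat the specifics as assumptions.

```ts
import { MilvusClient, DataType } from "@zilliz/milvus2-sdk-node";

// Sketch only: exact option names differ between SDK versions.
const client = new MilvusClient({
  address: process.env.MILVUS_ADDRESS ?? "localhost:19530",
});

// Create the collection (if missing) with the vector dimension reported by the
// embedder, so switching embedding models does not silently mismatch the index.
async function ensureCollection(name: string, embedderDimension: number) {
  const { value: exists } = await client.hasCollection({ collection_name: name });
  if (exists) return;

  await client.createCollection({
    collection_name: name,
    fields: [
      { name: "id", data_type: DataType.Int64, is_primary_key: true, autoID: true },
      { name: "vector", data_type: DataType.FloatVector, dim: embedderDimension },
    ],
  });
}
```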
* add Together AI LLM support
* update readme to reflect Together AI support
* Patch TogetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>