* add support for exporting workspace chats to JSON and CSV
* safely encode URL options
* remove message about OpenAI fine-tuning on export success
* default all exports to JSONL
* Persist special env keys on updates
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
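A minimal sketch of the export shaping described above; the function name and the `{ prompt, response }` row shape are hypothetical, not the project's actual helper:

```js
// Hypothetical sketch: serialize workspace chat rows as JSONL or CSV.
// The row shape ({ prompt, response }) is an assumption, not the real schema.
function exportChats(chats, format = "jsonl") {
  if (format === "csv") {
    const escape = (s) => `"${String(s).replace(/"/g, '""')}"`;
    const header = "prompt,response";
    const rows = chats.map((c) => [escape(c.prompt), escape(c.response)].join(","));
    return [header, ...rows].join("\n");
  }
  // Default format is JSONL: one JSON object per line.
  return chats.map((c) => JSON.stringify(c)).join("\n");
}

console.log(exportChats([{ prompt: "hi", response: "hello" }], "csv"));
```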
* create configurable topN per workspace
* Update TopN UI text
Fix fallbacks for all providers
Add SQLite CHECK constraint to TopN value
* merge with master
Update zilliz provider for variable TopN
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
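One way the per-workspace TopN resolution with provider fallbacks could look; the field name, bounds, and default of 4 are assumptions for illustration, mirroring the CHECK constraint mentioned above:

```js
// Hypothetical sketch: resolve a workspace's TopN with a safe fallback,
// enforcing the same 1..10 style bounds a SQLite CHECK might guarantee.
function topNForWorkspace(workspace, fallback = 4) {
  const n = Number(workspace?.topN);
  return Number.isInteger(n) && n >= 1 && n <= 10 ? n : fallback;
}

// The resolved value is what gets passed to the vector DB similarity search.
console.log(topNForWorkspace({ topN: 6 })); // 6
console.log(topNForWorkspace({})); // 4
```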
* add support for mistral api
* update docs to show support for Mistral
* add default temperature to all providers; suggested defaults differ per provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
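Mistral exposes an OpenAI-compatible chat endpoint, so a client sketch can reuse the OpenAI SDK; the model name and env var below are illustrative:

```js
// Sketch, assuming Mistral's OpenAI-compatible API at api.mistral.ai/v1.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.mistral.ai/v1",
  apiKey: process.env.MISTRAL_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "mistral-tiny", // illustrative model choice
  temperature: 0.0, // a conservative default; suggested values differ per provider
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```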
* WIP model selection per workspace (migrations and OpenAI save properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset; reset AI provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs
* set preferred model for chat on class instantiation
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to LocalAI models
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
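A sketch of setting the preferred chat model on class instantiation, as described above; the field and env names are plausible but not guaranteed to match the codebase:

```js
// Hypothetical sketch: prefer the workspace's chatModel, then an env
// default, then a hardcoded fallback, resolved once in the constructor.
class OpenAiLLM {
  constructor(workspace = {}) {
    this.model =
      workspace?.chatModel || process.env.OPEN_MODEL_PREF || "gpt-3.5-turbo";
  }
}

const llm = new OpenAiLLM({ chatModel: "gpt-4" });
console.log(llm.model); // "gpt-4"
```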
* Add support for fetching single document in documents folder
* Add document object to upload + support link scraping via API
* hotfixes for documentation
* update api docs
* issue #543 support milvus vector db
* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: add embedder model dimension
* update comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
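A sketch of talking to Milvus through the official Node SDK rather than an ORM layer; the address and collection name are illustrative, and option shapes vary a little across SDK versions:

```js
// Sketch using @zilliz/milvus2-sdk-node's MilvusClient directly.
import { MilvusClient } from "@zilliz/milvus2-sdk-node";

const client = new MilvusClient(process.env.MILVUS_ADDRESS ?? "localhost:19530");

// Check for a collection before use; the embedder's output dimension must
// match the collection schema, hence surfacing the embedder model dimension.
const { value: exists } = await client.hasCollection({
  collection_name: "anythingllm_demo",
});
console.log("collection exists:", exists);
```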
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
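The sorting/labeling by organization could be a simple group-by over the model catalog; the model objects below are illustrative:

```js
// Hypothetical sketch: group TogetherAI models by organization so the UI
// can render labeled <optgroup> sections.
const models = [
  { id: "togethercomputer/llama-2-7b-chat", organization: "Meta" },
  { id: "mistralai/Mixtral-8x7B-Instruct-v0.1", organization: "Mistral AI" },
];

const byOrganization = models.reduce((groups, model) => {
  const org = model.organization || "Unknown";
  (groups[org] ||= []).push(model);
  return groups;
}, {});

console.log(Object.keys(byOrganization)); // ["Meta", "Mistral AI"]
```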
* Added support for HTTPS to server.
* Move boot scripts to helper file
catch bad ssl boot config
fallback SSL boot to HTTP
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
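The catch-bad-config-and-fall-back behavior could be sketched like this; the env var names are assumptions:

```js
// Sketch: try to boot HTTPS from cert/key paths, fall back to HTTP when
// the SSL config is missing or unreadable.
import fs from "fs";
import http from "http";
import https from "https";

function bootServer(app, port = 3001) {
  try {
    const options = {
      key: fs.readFileSync(process.env.HTTPS_KEY_PATH),
      cert: fs.readFileSync(process.env.HTTPS_CERT_PATH),
    };
    return https.createServer(options, app).listen(port);
  } catch (e) {
    console.error("Bad SSL boot config, falling back to HTTP:", e.message);
    return http.createServer(app).listen(port);
  }
}
```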
* Issue #204 Added a check to ensure that 'chunk.payload' exists and contains the 'id' property before attempting to destructure it
* run linter
* simplify condition and comment
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
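The guard reads roughly like this sketch; the surrounding function is hypothetical:

```js
// Only destructure chunk.payload when it exists and carries an id.
function idsFromChunks(chunks = []) {
  const ids = [];
  for (const chunk of chunks) {
    if (!chunk?.payload?.id) continue; // skip malformed chunks safely
    const { id } = chunk.payload;
    ids.push(id);
  }
  return ids;
}

console.log(idsFromChunks([{ payload: { id: 1 } }, {}, null])); // [1]
```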
* feat: implement github repo loading
fix: purge of folders
fix: rendering of sub-files
* hide delete option on custom-documents
* Add API key support because of rate limits
* WIP for frontend of data connectors
* wip
* Add frontend form for GitHub repo data connector
* remove console.logs
block custom-documents from being deleted
* remove _meta unused arg
* Add support for ignore pathing in request
Ignore path input via tagging
* Update hint
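A plausible core for this connector is LangChain's GithubRepoLoader, shown here as a hedged sketch; check your installed langchain version for the exact option names:

```js
// Sketch: load a GitHub repo with an access token (to dodge rate limits)
// and ignore-path globs collected from the tagging input.
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

const loader = new GithubRepoLoader(
  "https://github.com/Mintplex-Labs/anything-llm",
  {
    branch: "master",
    recursive: true,
    accessToken: process.env.GITHUB_ACCESS_TOKEN,
    ignorePaths: ["*.png", "node_modules/**"],
  }
);

const docs = await loader.load();
console.log(`loaded ${docs.length} documents`);
```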
* wip: init refactor of document processor to JS
* add NodeJs PDF support
* wip: parity with Python processor
feat: add pptx support
* fix: forgot files
* Remove python scripts totally
* wip: update docker to boot new collector
* add package.json support
* update dockerfile for new build
* update gitignore and linting
* add more protections on file lookup
* update package.json
* test build
* update docker commands to use cap-add=SYS_ADMIN so web scraper can run
update all scripts to reflect this
remove docker build for branch
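For the Node-side PDF support, a minimal extraction sketch could use the pdf-parse package; the real collector may rely on a different loader:

```js
// Sketch: read a PDF from disk and return its plain text for chunking.
import fs from "fs";
import pdf from "pdf-parse";

async function pdfToText(filepath) {
  const buffer = fs.readFileSync(filepath);
  const data = await pdf(buffer);
  return data.text;
}
```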
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
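The native embedder likely boils down to an in-process feature-extraction pipeline; this sketch uses @xenova/transformers, with the surrounding shape being illustrative:

```js
// Sketch: embed text locally with all-MiniLM-L6-v2 (384-dim output),
// no external embedding API required.
import { pipeline } from "@xenova/transformers";

const embedder = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2"
);

const output = await embedder("Hello world", {
  pooling: "mean",
  normalize: true,
});
console.log(output.data.length); // 384
```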
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* fix sizing of onboarding modals & lint
* fix extra scrolling on mobile onboarding flow
* added message to use desktop for onboarding
* linting
* add arrow to scroll to bottom (debounced) and fix chat scrolling to always scroll to very bottom on message history change
* fix for empty chat
* change mobile alert copy
* WIP adding PFP upload support
* WIP pfp for users
* edit account menu complete with change username/password and upload profile picture
* add pfp context to update all instances of usePfp hook on update
* linting
* add context for logo change to immediately update logo
* fix div with bullet points to use list-disc instead
* fix: small changes
* update multer file storage locations
* fix: use STORAGE_DIR for filepathing
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
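The multer storage move described above could look like this sketch; the assets/pfp subfolder and filename scheme are assumptions:

```js
// Sketch: root uploaded profile pictures under STORAGE_DIR.
import path from "path";
import multer from "multer";

const storage = multer.diskStorage({
  destination: path.join(process.env.STORAGE_DIR ?? "./storage", "assets/pfp"),
  filename: (req, file, cb) => cb(null, `${Date.now()}-${file.originalname}`),
});

export const pfpUpload = multer({ storage });
```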
* forgot import
* add Data privacy alert updates for local embedder