* issue #543: support Milvus vector DB
* migrate Milvus to use MilvusClient instead of the ORM (see sketch below)
normalize env setup for docs/implementation
* feat: add embedder model dimension
* update comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
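A minimal sketch of the MilvusClient-style connection this migration refers to, assuming the `@zilliz/milvus2-sdk-node` package; the env var names and the `namespaceExists` helper are illustrative, not the project's actual code.

```ts
// Sketch: connect with MilvusClient instead of the ORM layer.
// Assumes @zilliz/milvus2-sdk-node; address/credentials come from env (assumed names).
import { MilvusClient } from "@zilliz/milvus2-sdk-node";

const client = new MilvusClient({
  address: process.env.MILVUS_ADDRESS ?? "localhost:19530",
  username: process.env.MILVUS_USERNAME,
  password: process.env.MILVUS_PASSWORD,
});

// Hypothetical helper: check a workspace namespace (collection) before writing vectors.
export async function namespaceExists(namespace: string): Promise<boolean> {
  const res = await client.hasCollection({ collection_name: namespace });
  return Boolean(res?.value);
}
```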
* Dynamic vector count on workspace settings
Make the count workspace-specific, with fallback to the system count (see sketch below)
Update layout of data in settings
Update OpenAI per-token embedding price
* linting
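A hedged sketch of the workspace-specific count with system fallback described above; the `VectorDbProvider` shape and function names are hypothetical, not the project's actual interface.

```ts
// Sketch: prefer a workspace-specific vector count, fall back to the
// system-wide total when a namespace count is unavailable. Names are illustrative.
interface VectorDbProvider {
  namespaceCount?: (namespace: string) => Promise<number>;
  totalVectors: () => Promise<number>;
}

export async function vectorCountForWorkspace(
  db: VectorDbProvider,
  workspaceSlug: string
): Promise<number> {
  if (db.namespaceCount) {
    const count = await db.namespaceCount(workspaceSlug).catch(() => null);
    if (count !== null) return count;
  }
  return db.totalVectors(); // system-wide fallback
}
```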
* add Together AI LLM support (see sketch below)
* update readme to support together ai
* Patch TogetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
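A rough sketch of a TogetherAI chat call, assuming their OpenAI-compatible chat completions endpoint and a `TOGETHER_AI_API_KEY` env var; the model id shown is only an example, not necessarily what the app ships.

```ts
// Sketch: minimal TogetherAI chat completion request (OpenAI-compatible API assumed).
export async function togetherAiChat(prompt: string): Promise<string> {
  const res = await fetch("https://api.together.xyz/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TOGETHER_AI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data?.choices?.[0]?.message?.content ?? "";
}
```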
Implement support for GitHub codespaces and VSCode devcontainers
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
* move llm, embedder, vectordb items to components folder
* add backdrop blur to search in llm, embedder, vectordb preferences
* implement searchable llm preference in settings (see sketch below)
* implement searchable embedder in settings
* remove unused useState from embedder preferences
* implement searchable vector database in settings
* fix save changes button not appearing on change for llm, embedder, and vectordb settings pages
* sort selected items in all settings and put selected item at top of list
* remove auto-move of selected item to top of list
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
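A small sketch of the searchable-preference idea: filter provider options by name or organization while keeping their original order. The `ProviderOption` shape is illustrative, not the app's actual data model.

```ts
// Sketch: filter LLM/embedder/vector DB options by a search query,
// without auto-moving the selected item to the top.
interface ProviderOption {
  name: string;         // e.g. "OpenAI" (illustrative)
  value: string;        // e.g. "openai"
  organization?: string;
}

export function filterProviders(
  options: ProviderOption[],
  query: string
): ProviderOption[] {
  const q = query.trim().toLowerCase();
  if (!q) return options;
  return options.filter(
    (o) =>
      o.name.toLowerCase().includes(q) ||
      (o.organization ?? "").toLowerCase().includes(q)
  );
}
```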
* feat: implement github repo loading
fix: purge of folders
fix: rendering of sub-files
* hide delete option on custom-documents
* Add API key support because of rate limits
* WIP for frontend of data connectors
* wip
* Add frontend form for GitHub repo data connector
* remove console.logs
block custom-documents from being deleted
* remove _meta unused arg
* Add support for ignore paths in the request
Ignore path input via tagging (see sketch below)
* Update hint
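A sketch of GitHub repo loading with an access token (for rate limits) and ignore-path globs, assuming LangChain's `GithubRepoLoader`; option names can differ between langchain versions and this may not match the collector's actual loader.

```ts
// Sketch: load a GitHub repo with an access token and ignore paths,
// assuming LangChain's GithubRepoLoader. Branch and env var name are assumptions.
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export async function loadRepo(repoUrl: string, ignorePaths: string[]) {
  const loader = new GithubRepoLoader(repoUrl, {
    branch: "main",                              // assumption: default branch
    recursive: true,
    accessToken: process.env.GITHUB_ACCESS_TOKEN, // eases API rate limits
    ignorePaths,                                  // e.g. ["*.png", "dist/**"]
    unknown: "warn",
  });
  return loader.load(); // LangChain Document[]
}
```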
* feat: Embed on-instance Whisper model for audio/mp4 transcription (see sketch below)
resolves #329
* additional logging
* add placeholder for tmp folder in collector storage
Add cleanup of hotdir and tmp on collector boot to prevent hanging files
split model loading and file conversion into concurrent steps
* update README
* update model size
* update supported filetypes
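A sketch of on-instance transcription with `@xenova/transformers`; it assumes the audio/mp4 input has already been decoded to 16 kHz mono PCM (e.g. by ffmpeg in the collector), and `Xenova/whisper-small` is only an example model id.

```ts
// Sketch: local Whisper transcription via @xenova/transformers.
// Assumes `pcm` is 16 kHz mono audio samples produced upstream.
import { pipeline } from "@xenova/transformers";

export async function transcribe(pcm: Float32Array): Promise<string> {
  const asr = await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-small" // example model id; actual model size may differ
  );
  const result: any = await asr(pcm, { chunk_length_s: 30 });
  return result.text as string;
}
```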
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* fix sizing of onboarding modals & lint
* fix extra scrolling on mobile onboarding flow
* added message to use desktop for onboarding
* linting
* add arrow to scroll to bottom (debounced) and fix chat scrolling to always scroll to very bottom on message history change
* fix for empty chat
* change mobile alert copy
* WIP adding PFP upload support
* WIP pfp for users
* complete edit-account menu with username/password change and profile picture upload
* add pfp context to update all instances of usePfp hook on update
* linting
* add context for logo change to immediately update logo
* fix div with bullet points to use list-disc instead
* fix: small changes
* update multer file storage locations
* fix: use STORAGE_DIR for filepathing (see sketch below)
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
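A sketch of routing multer uploads through `STORAGE_DIR`; the `assets/pfp` folder, fallback path, and filename pattern are illustrative, not the project's exact layout.

```ts
// Sketch: store profile picture uploads under STORAGE_DIR instead of a hard-coded path.
import path from "path";
import fs from "fs";
import multer from "multer";

const storage = multer.diskStorage({
  destination: (_req, _file, cb) => {
    // STORAGE_DIR is the env var named in the commit; the fallback is an assumption.
    const dir = path.join(
      process.env.STORAGE_DIR ?? path.resolve(__dirname, "storage"),
      "assets",
      "pfp"
    );
    fs.mkdirSync(dir, { recursive: true }); // ensure the folder exists
    cb(null, dir);
  },
  filename: (_req, file, cb) => cb(null, `${Date.now()}-${file.originalname}`),
});

export const pfpUpload = multer({ storage }); // e.g. pfpUpload.single("file")
```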
* Implement use of native embedder (all-MiniLM-L6-v2; see sketch below)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* Update to progress output for embedder
* move embedder selection options to component
* forgot import
* add Data privacy alert updates for local embedder
* show gear icon on hover for workspace
* put back user role check for default
* wrap in callback
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
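A sketch of the native embedder idea, assuming the `@xenova/transformers` feature-extraction pipeline with `Xenova/all-MiniLM-L6-v2`; caching the pipeline at module scope mirrors the "persist model in memory for N+1 chats" note above, though the project's actual wiring may differ.

```ts
// Sketch: local embedding with all-MiniLM-L6-v2 via @xenova/transformers.
import { pipeline } from "@xenova/transformers";

// Cached at module scope so repeated chats reuse the loaded model.
let embedder: any = null;

export async function embedText(text: string): Promise<number[]> {
  embedder ??= await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  const output = await embedder(text, { pooling: "mean", normalize: true });
  return Array.from(output.data as Float32Array); // 384-dimension vector
}
```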
* fix black text for custom messages text
* fix file upload icon being stretched
* center github, docs, and discord icons in sidebar
* fix chat container being cut off on right side and tighten spacing between messages
* fix default chat container being cut off on right side
* on create new workspace, take user to the workspace they just created instead of the home page
* add border to chat container; close user menu on outside click (see sketch below)
* fix borders around all chat and settings containers to be consistent
* fix padding for default messages
* fix spacing between workspace items in sidebar
* fix margin around right side of chat, default, and settings containers to be the same as the left sidebar
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
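A generic sketch of the "close user menu on outside click" behavior as a React hook; the actual component wiring in the app may differ.

```ts
// Sketch: run a callback when a click lands outside the referenced element.
import { useEffect, type RefObject } from "react";

export function useClickOutside(
  ref: RefObject<HTMLElement>,
  onOutside: () => void
) {
  useEffect(() => {
    function handle(e: MouseEvent) {
      if (ref.current && !ref.current.contains(e.target as Node)) onOutside();
    }
    document.addEventListener("mousedown", handle);
    return () => document.removeEventListener("mousedown", handle);
  }, [ref, onOutside]);
}
```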
* allow use of any embedder for any LLM; update data handling modal
* Apply embedder override and fall back to OpenAI and Azure models (see sketch below)
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
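A hedged sketch of the override-then-fallback selection; the provider keys and the final default are assumptions, not the project's actual resolution logic.

```ts
// Sketch: use the explicitly configured embedder when set, otherwise fall back
// to the LLM provider's own embedding models for providers that have them.
type EmbedderKey = "openai" | "azure" | "native" | "localai";

export function resolveEmbedder(
  embedderPref: EmbedderKey | null,
  llmProvider: string
): EmbedderKey {
  if (embedderPref) return embedderPref; // explicit override wins
  if (llmProvider === "openai" || llmProvider === "azure") {
    return llmProvider; // these providers can embed with their own models
  }
  return "native"; // assumption: default to the built-in local embedder
}
```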
* update NewUserModal to match ui styles
* fix bg color of invite screen; auto-login after accepting invitation
* fix error text color
* cleanup
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feature: add LocalAI as LLM provider
* update Onboarding/mgmt settings
Grab models from the models endpoint for LocalAI (see sketch below)
merge with master
* update streaming for complete chunk streaming
update localAI LLM to be able to stream
* force scheme on URL
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
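A sketch of pulling the model list from a LocalAI instance via its OpenAI-compatible `/models` endpoint, including the "force scheme on URL" guard; the API-key header and env var name are assumptions.

```ts
// Sketch: list models from LocalAI's OpenAI-compatible API, prepending http://
// when the user omitted a scheme.
export async function localAiModels(basePath: string): Promise<string[]> {
  const root = /^https?:\/\//i.test(basePath) ? basePath : `http://${basePath}`;
  const res = await fetch(`${root.replace(/\/$/, "")}/models`, {
    headers: process.env.LOCAL_AI_API_KEY
      ? { Authorization: `Bearer ${process.env.LOCAL_AI_API_KEY}` }
      : undefined,
  });
  const data = await res.json();
  return (data?.data ?? []).map((m: { id: string }) => m.id);
}
```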