* feat: add new model provider: Novita AI
* feat: finished Novita AI integration
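Roughly, the provider can be a thin wrapper around an OpenAI-compatible client. A minimal sketch, where the base URL, env var names, and default model are assumptions, not taken from this log:

```ts
import OpenAI from "openai";

// Minimal sketch of a Novita AI provider as an OpenAI-compatible client.
// NOVITA_LLM_BASE_PATH, NOVITA_LLM_API_KEY, and the model default are illustrative.
const novita = new OpenAI({
  baseURL: process.env.NOVITA_LLM_BASE_PATH ?? "https://api.novita.ai/v3/openai",
  apiKey: process.env.NOVITA_LLM_API_KEY ?? "",
});

export async function chatCompletion(prompt: string): Promise<string> {
  const res = await novita.chat.completions.create({
    model: process.env.NOVITA_LLM_MODEL_PREF ?? "meta-llama/llama-3.1-8b-instruct",
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}
```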
* fix: code lint
* remove unneeded logging
* add back log for Novita stream not self-closing
* Clarify ENV vars for future LLM/embedder separation
Patch ENV check for workspace/agent provider
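A sketch of the split, assuming the LLM provider and embedding engine are each selected by their own ENV var; variable names and defaults are illustrative:

```ts
// LLM and embedder are selected independently so either can change
// without touching the other. Names and fallbacks are assumptions.
export function selectedProviders() {
  return {
    llmProvider: process.env.LLM_PROVIDER ?? "openai",
    embeddingEngine: process.env.EMBEDDING_ENGINE ?? process.env.LLM_PROVIDER ?? "openai",
  };
}
```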
---------
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* feat: Embed on-instance Whisper model for audio/mp4 transcribing
resolves #329
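A minimal sketch of on-instance transcription, assuming a transformers.js-style ASR pipeline; the whisper-small checkpoint and chunking values are assumptions:

```ts
import { pipeline } from "@xenova/transformers";

// pcm: 16 kHz mono Float32Array samples decoded from the audio/mp4 track.
export async function transcribe(pcm: Float32Array): Promise<string> {
  const transcriber = await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-small"
  );
  // chunking keeps memory bounded for long recordings
  const output = await transcriber(pcm, { chunk_length_s: 30, stride_length_s: 5 });
  return (output as { text: string }).text;
}
```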
* additional logging
* add placeholder for tmp folder in collector storage
Add cleanup of hotdir and tmp on collector boot to prevent hanging files
split loading of model and file conversion into concurrency
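A sketch of the boot-time cleanup, wiping anything left in hotdir and tmp so a crashed run cannot leave hanging files behind; the directory paths and placeholder convention are assumptions:

```ts
import fs from "fs";
import path from "path";

// Directories the collector treats as scratch space; paths are illustrative.
const dirsToClean = [
  path.resolve(__dirname, "hotdir"),
  path.resolve(__dirname, "storage/tmp"),
];

export function wipeCollectorStorage(): void {
  for (const dir of dirsToClean) {
    if (!fs.existsSync(dir)) continue;
    for (const entry of fs.readdirSync(dir)) {
      if (entry.startsWith(".")) continue; // keep placeholder files like .gitkeep
      fs.rmSync(path.join(dir, entry), { recursive: true, force: true });
    }
  }
}
```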
* update README
* update model size
* update supported filetypes
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
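A minimal sketch of the native embedder, assuming a transformers.js feature-extraction pipeline over all-MiniLM-L6-v2 with mean pooling and normalization:

```ts
import { pipeline } from "@xenova/transformers";

// Returns one normalized 384-dim vector per input text.
export async function embedTexts(texts: string[]): Promise<number[][]> {
  const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  const vectors: number[][] = [];
  for (const text of texts) {
    const output = await extractor(text, { pooling: "mean", normalize: true });
    vectors.push(Array.from(output.data as Float32Array));
  }
  return vectors;
}
```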
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
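A sketch of that guard, so a corrupt or partial download surfaces a readable error instead of crashing the request; the function name and message are illustrative:

```ts
// Wrap any model loader so load failures become actionable errors.
export async function loadModelSafely<T>(loader: () => Promise<T>): Promise<T> {
  try {
    return await loader();
  } catch (error) {
    throw new Error(
      `Native model failed to load: ${(error as Error).message}. ` +
        `Delete the cached model folder and retry the download.`
    );
  }
}
```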
* print progress on download
* add built-in LLM support (experimental)
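A sketch of the experimental built-in LLM, assuming node-llama-cpp bindings and a locally downloaded GGUF file; the storage path and model filename are illustrative:

```ts
import path from "path";
import { LlamaModel, LlamaContext, LlamaChatSession } from "node-llama-cpp";

// Load a local GGUF model and answer a single prompt.
export async function askLocalModel(prompt: string): Promise<string> {
  const model = new LlamaModel({
    modelPath: path.resolve("storage/models/downloaded/llama-2-7b-chat.Q4_K_M.gguf"),
  });
  const context = new LlamaContext({ model });
  const session = new LlamaChatSession({ context });
  return await session.prompt(prompt);
}
```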
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
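A sketch of the modelfile check, refusing to load a file that is missing or obviously truncated; the minimum-size threshold is an illustrative value:

```ts
import fs from "fs";

// Fail fast on missing or partially downloaded model files.
export function assertValidModelFile(modelPath: string, minBytes = 10_000_000): void {
  if (!fs.existsSync(modelPath)) {
    throw new Error(`No model file found at ${modelPath}`);
  }
  const { size } = fs.statSync(modelPath);
  if (size < minBytes) {
    throw new Error(`Model file at ${modelPath} looks incomplete (${size} bytes) - re-download it.`);
  }
}
```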
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
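A sketch of keeping the loaded model resident: the first chat pays the load cost, every following (N+1) chat reuses the same in-memory instance. The cache shape is illustrative:

```ts
// Module-scoped cache so repeated chats reuse the already-loaded model.
const modelCache = new Map<string, unknown>();

export async function getOrLoadModel<T>(
  name: string,
  load: () => Promise<T>
): Promise<T> {
  if (!modelCache.has(name)) modelCache.set(name, await load());
  return modelCache.get(name) as T;
}
```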
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* forgot import
* add Data privacy alert updates for local embedder