* feat: implement github repo loading
fix: purge of folders
fix: rendering of sub-files
* hide the delete action on custom-documents
* Add GitHub API key support to avoid rate limits
* WIP for frontend of data connectors
* wip
* Add frontend form for GitHub repo data connector
* remove console.logs
block custom-documents from being deleted
* remove _meta unused arg
* Add support for ignore pathing in request
Ignore path input via tagging
* Update hint
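A minimal sketch of the GitHub connector work above, using the public GitHub REST API directly: an optional access token (for higher rate limits) and an ignore-path filter mirror the commits, but the function and option names are illustrative, not the connector's actual implementation.

```ts
// Sketch: fetch every file in a repo branch, honoring an ignore list.
type RepoFile = { path: string; content: string };

export async function loadGithubRepo(
  owner: string,
  repo: string,
  branch = "main",
  accessToken?: string,
  ignorePaths: string[] = []
): Promise<RepoFile[]> {
  // Authenticated requests get a much higher rate limit than anonymous ones.
  const headers: Record<string, string> = accessToken
    ? { Authorization: `Bearer ${accessToken}` }
    : {};

  // One call returns the full file tree for the branch.
  const treeRes = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/git/trees/${branch}?recursive=1`,
    { headers }
  );
  if (!treeRes.ok) throw new Error(`GitHub API error: ${treeRes.status}`);
  const { tree } = (await treeRes.json()) as {
    tree: { path: string; type: string }[];
  };

  const files: RepoFile[] = [];
  for (const node of tree) {
    if (node.type !== "blob") continue; // skip directories
    if (ignorePaths.some((p) => node.path.startsWith(p))) continue; // user-tagged ignore paths
    const raw = await fetch(
      `https://raw.githubusercontent.com/${owner}/${repo}/${branch}/${node.path}`,
      { headers }
    );
    if (!raw.ok) continue;
    files.push({ path: node.path, content: await raw.text() });
  }
  return files;
}
```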
* feat: Embed on-instance Whisper model for audio/mp4 transcribing
resolves #329
* additional logging
* add placeholder for tmp folder in collector storage
Add cleanup of hotdir and tmp on collector boot to prevent hanging files
run model loading and file conversion concurrently
* update README
* update model size
* update supported filetypes
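A rough sketch of the on-instance transcription referenced above, using the @xenova/transformers Whisper pipeline; the model id and decoding options are assumptions, and decoding the uploaded file to PCM (e.g. via ffmpeg) is not shown.

```ts
import { pipeline } from "@xenova/transformers";

// `audio` must be mono 16kHz PCM as a Float32Array; converting the uploaded
// audio/mp4 file into that format happens before this step.
export async function transcribe(audio: Float32Array): Promise<string> {
  // Model id is an assumption for illustration; weights download on first use.
  const transcriber = await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-small"
  );
  const output = await transcriber(audio, {
    chunk_length_s: 30, // process long recordings in 30s windows
    stride_length_s: 5, // overlap windows so words aren't cut mid-chunk
  });
  return (output as { text: string }).text;
}
```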
* wip: init refactor of document processor to JS
* add NodeJs PDF support
* wip: parity with python processor
feat: add pptx support
* fix: forgot files
* Remove python scripts totally
* wip: update docker to boot new collector
* add package.json support
* update dockerfile for new build
* update gitignore and linting
* add more protections on file lookup
* update package.json
* test build
* update docker commands to use cap-add=SYS_ADMIN so web scraper can run
update all scripts to reflect this
remove docker build for branch
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
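A sketch of what the native embedder commits above describe: loading all-MiniLM-L6-v2 through @xenova/transformers inside a try/catch and printing download progress. The progress formatting and function names are illustrative.

```ts
import { pipeline } from "@xenova/transformers";

// Load the local embedding model, reporting download progress as it arrives.
async function loadEmbedder() {
  try {
    return await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2", {
      // Called repeatedly while model files download.
      progress_callback: (p: { status: string; progress?: number }) => {
        if (p.progress !== undefined)
          console.log(`Downloading model: ${p.progress.toFixed(1)}%`);
      },
    });
  } catch (e) {
    throw new Error(`Failed to load native embedder: ${(e as Error).message}`);
  }
}

export async function embedTexts(texts: string[]): Promise<number[][]> {
  const embedder = await loadEmbedder();
  // Mean pooling + normalization yields one 384-dim vector per input string.
  const output = await embedder(texts, { pooling: "mean", normalize: true });
  return (output as { tolist(): number[][] }).tolist();
}
```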
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
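The "persist model in memory for N+1 chats" commit suggests a module-level cache so the local LLM loads once per process. A generic sketch of that pattern; the actual llama.cpp binding and its API are not shown, only the caching shape.

```ts
// The real llama.cpp binding is not shown; any async loader works here.
type LoadedModel = { generate: (prompt: string) => Promise<string> };

let cachedModel: Promise<LoadedModel> | null = null;

// First call pays the load cost; every later chat reuses the same promise,
// so the weights stay resident in memory for N+1 chats.
export function getModel(
  load: () => Promise<LoadedModel>
): Promise<LoadedModel> {
  if (!cachedModel) cachedModel = load();
  return cachedModel;
}
```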
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* fix sizing of onboarding modals & lint
* fix extra scrolling on mobile onboarding flow
* added message to use desktop for onboarding
* linting
* add arrow to scroll to bottom (debounced); fix chat scrolling to always scroll to the very bottom on message history change
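A sketch of that scrolling behavior: snap to the very bottom whenever the message history changes, and show a "scroll to bottom" arrow only after a debounced scroll check. Hook and state names are illustrative.

```ts
import { useEffect, useRef, useState } from "react";

export function useChatScroll(messageCount: number) {
  const containerRef = useRef<HTMLDivElement | null>(null);
  const [showArrow, setShowArrow] = useState(false);

  // Always jump to the very bottom whenever the message history changes.
  useEffect(() => {
    const el = containerRef.current;
    if (el) el.scrollTop = el.scrollHeight;
  }, [messageCount]);

  // Debounced scroll handler: decide whether to show the arrow only after
  // the user has stopped scrolling for 100ms.
  useEffect(() => {
    const el = containerRef.current;
    if (!el) return;
    let timer: ReturnType<typeof setTimeout>;
    const onScroll = () => {
      clearTimeout(timer);
      timer = setTimeout(() => {
        const distanceFromBottom =
          el.scrollHeight - el.scrollTop - el.clientHeight;
        setShowArrow(distanceFromBottom > 40);
      }, 100);
    };
    el.addEventListener("scroll", onScroll);
    return () => {
      clearTimeout(timer);
      el.removeEventListener("scroll", onScroll);
    };
  }, []);

  const scrollToBottom = () =>
    containerRef.current?.scrollTo({ top: containerRef.current.scrollHeight });

  return { containerRef, showArrow, scrollToBottom };
}
```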
* fix for empty chat
* change mobile alert copy
* WIP adding PFP upload support
* WIP pfp for users
* edit account menu complete with change username/password and upload profile picture
* add pfp context to update all instances of usePfp hook on update
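A sketch of the context pattern the pfp commit describes: a provider holds the current profile picture URL and a usePfp hook lets every consumer re-render the moment it changes after an upload. Names are illustrative, not the app's actual components; the logo context mentioned below follows the same shape.

```tsx
import { createContext, useContext, useState, ReactNode } from "react";

type PfpContextValue = {
  pfp: string | null;
  setPfp: (url: string | null) => void;
};
const PfpContext = createContext<PfpContextValue | null>(null);

// Wrap the app once; any component can then read or update the pfp.
export function PfpProvider({ children }: { children: ReactNode }) {
  const [pfp, setPfp] = useState<string | null>(null);
  return (
    <PfpContext.Provider value={{ pfp, setPfp }}>
      {children}
    </PfpContext.Provider>
  );
}

// Every caller of usePfp re-renders when setPfp runs after an upload.
export function usePfp() {
  const ctx = useContext(PfpContext);
  if (!ctx) throw new Error("usePfp must be used inside PfpProvider");
  return ctx;
}
```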
* linting
* add context for logo change to immediately update logo
* fix div with bullet points to use list-disc instead
* fix: small changes
* update multer file storage locations
* fix: use STORAGE_DIR for filepathing
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* Update to progress output for embedder
* move embedder selection options to component
* forgot import
* add Data privacy alert updates for local embedder
* allow use of any embedder for any llm/update data handling modal
* Apply embedder override and fallback to OpenAI and Azure models
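The override-and-fallback behavior in the last commit can be read as: honor an explicit embedding-engine setting when present, otherwise fall back to the selected LLM provider's own embedder (OpenAI or Azure), else the native one. A sketch under that assumption; the env var and engine names are illustrative.

```ts
// Decide which embedding engine to use: explicit override first, else the
// LLM provider's own embedder, else the local native embedder.
export function selectEmbedderEngine(env = process.env): string {
  const override = env.EMBEDDING_ENGINE;
  if (override) return override; // e.g. "native", "openai", "azure"

  switch (env.LLM_PROVIDER) {
    case "openai":
      return "openai"; // reuse the OpenAI key for embeddings
    case "azure":
      return "azure"; // reuse the Azure OpenAI deployment
    default:
      return "native"; // local all-MiniLM-L6-v2 fallback
  }
}
```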
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feature: add LocalAI as llm provider
* update Onboarding/mgmt settings
Grab models from models endpoint for localai
merge with master
* update streaming for complete chunk streaming
update localAI LLM to be able to stream
* force scheme on URL
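A sketch of the two LocalAI details above: listing available models from the OpenAI-compatible /v1/models endpoint, and forcing a scheme onto the user-supplied base URL before using it. The default scheme choice is an assumption.

```ts
// Ensure the user-supplied base path has a scheme before it is used.
export function withScheme(basePath: string): string {
  return /^https?:\/\//i.test(basePath) ? basePath : `http://${basePath}`;
}

// LocalAI exposes the OpenAI-compatible models endpoint, so the list of
// selectable models can be fetched at onboarding/settings time.
export async function listLocalAiModels(basePath: string): Promise<string[]> {
  const res = await fetch(`${withScheme(basePath)}/v1/models`);
  if (!res.ok) return [];
  const body = (await res.json()) as { data?: { id: string }[] };
  return (body.data ?? []).map((m) => m.id);
}
```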
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
* added manager role to options
* block default role from editing workspace settings via the workspace item and text input box
* block default user from accessing settings at all
* create manager route
* let pass through if in single user mode
* fix permissions for manager and admin roles in settings
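The role commits above boil down to a middleware that checks the requesting user's role against an allow-list, but lets everything through when the instance is not in multi-user mode. A sketch with illustrative names (roles, req.user, the multiUserMode flag); it is not the app's actual middleware.

```ts
// Minimal request/response shapes so the sketch stands alone.
type Role = "admin" | "manager" | "default";
interface Req { user?: { role: Role } }
interface Res { sendStatus(code: number): void }
type Next = () => void;

// Returns an Express-style middleware that only lets the listed roles in.
// In single-user mode there are no roles to enforce, so requests pass through.
export function roleValid(allowed: Role[], multiUserMode: boolean) {
  return (req: Req, res: Res, next: Next) => {
    if (!multiUserMode) return next();
    const role = req.user?.role;
    if (role && allowed.includes(role)) return next();
    return res.sendStatus(401);
  };
}

// e.g. router.post("/admin/users", roleValid(["admin", "manager"], true), handler);
```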
* fix settings button for single user and remove unneeded console.logs
* rename routes and paths for clarity
* admin, manager, default roles complete
* remove unneeded comments
* consistency changes
* manage permissions for multi-user (MUM) modes
* update sidebar for single-user mode
* update comment on middleware
Modify permission setting for admins
* update render conditional
* Add role usage hint to each role
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Using OpenAI API locally
* Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff
* Implement compression for Anthropic
Fix lancedb sources
* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
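The "infinite prompt input and compression" work above implies trimming chat history to fit the model's context window: keep the system prompt and the newest turns, dropping or summarizing older ones once a token budget is exceeded. A sketch using a rough 4-characters-per-token estimate, which is an assumption rather than the project's real tokenizer.

```ts
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Crude token estimate; the real implementation would use a proper tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Keep the system prompt plus as many of the most recent messages as fit.
export function compressWindow(
  messages: ChatMessage[],
  maxTokens: number
): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");

  let budget =
    maxTokens - system.reduce((n, m) => n + estimateTokens(m.content), 0);

  const kept: ChatMessage[] = [];
  // Walk backwards so the newest turns are always preserved.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break;
    kept.unshift(rest[i]);
    budget -= cost;
  }
  return [...system, ...kept];
}
```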
* disable import on hosted instances (#339)
* disable import on hosted instances
* Update UI on disabled import/export
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Add support for gpt-4-turbo 128K model (#340)
resolves #336
Add support for gpt-4-turbo 128K model
* 315 show citations based on relevancy score (#316)
* settings for similarity score threshold and prisma schema updated
* prisma schema migration for adding similarityScore setting
* WIP
* Min score default change
* added similarityThreshold checking for all vectordb providers
* linting
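The similarity-threshold commits above amount to filtering retrieved chunks by score before they become citations. A sketch of that filter; the field names and the default threshold value are illustrative.

```ts
type RetrievedChunk = {
  text: string;
  score: number;
  metadata?: Record<string, unknown>;
};

// Drop any retrieved chunk whose similarity score falls below the
// workspace's configured threshold so weak matches are never cited.
export function filterBySimilarity(
  chunks: RetrievedChunk[],
  similarityThreshold = 0.25
): RetrievedChunk[] {
  return chunks.filter((c) => c.score >= similarityThreshold);
}
```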
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* rename localai to lmstudio
* forgot files that were renamed
* normalize model interface
* add model and context window limits
* update LMStudio tagline
* Fully working LMStudio integration
---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
* added JSONL export to workspace chats
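A sketch of the JSONL export mentioned above: each workspace chat becomes one JSON object per line, here in a simple prompt/completion shape suitable for fine-tuning; the exact field names the app exports are not assumed.

```ts
type WorkspaceChat = { prompt: string; response: string };

// JSONL is simply newline-delimited JSON: one object per line.
export function chatsToJsonl(chats: WorkspaceChat[]): string {
  return chats
    .map((c) => JSON.stringify({ prompt: c.prompt, completion: c.response }))
    .join("\n");
}
```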
* change permissions for workspace chat settings
* Show error for correct limit on fine-tune
Change sidebar position and permission
Remove check for MUM
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>