* WIP embedded app
* WIP got response from backend in embedded app
* WIP streaming prints to embedded app
* implement streaming and minified Tailwind styling in embedded app
* WIP embedded app history functional
* load params from script tag into embedded app
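As a rough sketch of the idea (the attribute names here are assumptions, not the actual parameter names), the embed bundle can read its configuration straight from the `<script>` tag that loaded it:

```js
// Illustrative sketch: read embed parameters from the <script> tag that loaded this bundle.
// Attribute names (data-embed-id, data-base-api-url, data-open-on-load) are assumptions.
const scriptTag = document.currentScript;
const embedSettings = {
  embedId: scriptTag?.getAttribute("data-embed-id"),
  baseApiUrl: scriptTag?.getAttribute("data-base-api-url"),
  openOnLoad: scriptTag?.getAttribute("data-open-on-load") === "true",
};
```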
* rough in modularization of embed chat
cleanup dev process for easier dev support
move all chat to components
todo: build process
todo: backend support
* remove eslint config
* Implement models and cleanup embed chat endpoints
Improve build process for embed
prod minification and bundle size awareness
WIP
* forgot files
* rename to embed folder
* introduce chat modal styles
* add middleware validations on embed chat
* auto open param and default greeting
* reset chat history
* Admin embed config page
* Admin Embed Chats mgmt page
* update embed
* nonpriv
* more style support
reopen if chat was last opened
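A minimal sketch of how the "reopen if last opened" behavior could be persisted, assuming a localStorage key (the key name is illustrative):

```js
// Hypothetical sketch: remember whether the chat window was open so it can reopen on the next load.
const OPEN_STATE_KEY = "embed-chat-open"; // assumed key name
function rememberOpenState(isOpen) {
  window.localStorage.setItem(OPEN_STATE_KEY, String(isOpen));
}
function shouldReopenChat() {
  return window.localStorage.getItem(OPEN_STATE_KEY) === "true";
}
```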
* update comments
* remove unused imports
* allow change of workspace for embedconfig
* update failure to lookup message
* update reset script
* update instructions
* Add more styling options
Add sponsor text at bottom
Support dynamic container height
Loading animations
* publish new embed script
* Add back syntax highlighting and keep bundle small via dynamic script build
* add hint
* update readme
* update copy model for snippet with link to styles
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* feature: Integrate Astra as vectorDBProvider
feature: Integrate Astra as vectorDBProvider
* Update .env.example
* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation
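A hedged sketch of that dimensionality logic: take the vector dimension from the first embedded chunk instead of hard-coding it (names and shapes are assumptions for the example):

```js
// Hedged sketch: infer collection dimensionality from the first chunk's embedding
// so the collection can be created with a matching dimension. Names are illustrative.
function inferDimensionality(chunks) {
  const dimensions = chunks?.[0]?.values?.length ?? null;
  if (!dimensions) throw new Error("Cannot infer dimensionality from an empty chunk set.");
  return dimensions; // e.g. 1536 for OpenAI ada-002 embeddings
}
```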
* reset workspaces
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* migrate pinecone package to latest version and migrate pinecone vectordb provider class
* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete
* migrate query for pinecone db
* typo in log
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Implement total permission overhaul
Add explicit permissions on each flex and strict route
Patch issues with role escalation and CRUD of users
Patch permissions on all routes for coverage
Improve middleware to accept role array for clarity
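A hedged sketch of what a role-array middleware can look like; the function name, ROLES constant, and request shape are illustrative rather than the project's actual implementation:

```js
// Illustrative sketch of a role-gating middleware that accepts an array of allowed roles.
const ROLES = { admin: "admin", manager: "manager", default: "default" };

function roleValid(allowedRoles = [ROLES.admin]) {
  return (request, response, next) => {
    const user = response.locals?.user;
    if (!user) return next(); // single-user mode: nothing to enforce
    if (allowedRoles.includes(user.role)) return next();
    return response.sendStatus(401);
  };
}

// Usage: app.get("/admin/users", roleValid([ROLES.admin, ROLES.manager]), handler);
```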
* update comments
* remove API-key permissions from the manager role. A manager could generate an API key and use that high-privilege key to grant themselves admin
* update sidebar permissions for multi-user and single user
* update options for mobile sidebar
* create configurable topN per workspace
* Update TopN UI text
Fix fallbacks for all providers
Add SQLite CHECK to TOPN value
* merge with master
Update zilliz provider for variable TopN
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* add support for mistral api
* update docs to show support for Mistral
* add default temp to all providers, suggest different results per provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* WIP model selection per workspace (migrations and openai saves properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset, reset Ai provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs
* set preferred model for chat on class instantiation
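A minimal sketch of the instantiation-time model preference, assuming an OpenAI-style provider class and an `OPEN_MODEL_PREF` environment fallback (both are assumptions for the example):

```js
// Hedged sketch: pick the workspace's preferred chat model at construction time,
// falling back to a provider-level default. Property names are assumptions.
class OpenAiLLM {
  constructor(embedder = null, modelPreference = null) {
    this.model = modelPreference || process.env.OPEN_MODEL_PREF || "gpt-3.5-turbo";
    this.embedder = embedder;
  }
}
```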
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to localai models
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* issue #543 support milvus vector db
* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added
* update comments
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
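A hedged sketch of that in-memory persistence: cache the loaded model at module scope so chats after the first reuse it instead of reloading weights from disk (names are illustrative):

```js
// Hedged sketch: module-level cache so the N+1th chat reuses the already-loaded model.
let cachedModel = null;

async function getLoadedModel(loadModelFn) {
  if (cachedModel) return cachedModel;
  cachedModel = await loadModelFn();
  return cachedModel;
}
```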
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* Update to progress output for embedder
* move embedder selection options to component
* forgot import
* add Data privacy alert updates for local embedder
* feature: add LocalAI as llm provider
* update Onboarding/mgmt settings
Grab models from models endpoint for localai
merge with master
* update streaming for complete chunk streaming
update localAI LLM to be able to stream
* force schema on URL
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
* added manager role to options
* block default role from editing workspace settings on workspace and text input box
* block default user from accessing settings at all
* create manager route
* let pass through if in single user mode
* fix permissions for manager and admin roles in settings
* fix settings button for single user and remove unneeded console.logs
* rename routes and paths for clarity
* admin, manager, default roles complete
* remove unneeded comments
* consistency changes
* manage permissions for multi-user (MUM) modes
* update sidebar for single-user mode
* update comment on middleware
Modify permission setting for admins
* update render conditional
* Add role usage hint to each role
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Using OpenAI API locally
* Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff
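To make the compression abstraction concrete, here is a hedged illustration of the idea: trim the oldest history until the prompt fits a token budget. The `countTokens` helper and the 75% budget are assumptions for the example, not the actual implementation:

```js
// Hedged illustration: keep the system prompt and the newest messages, dropping the
// oldest turns once the token budget for the model's context window is exceeded.
function compressMessages({ systemPrompt, history, userPrompt }, contextWindow, countTokens) {
  const budget = Math.floor(contextWindow * 0.75); // assumed budget
  const messages = [...history];
  while (
    messages.length &&
    countTokens([systemPrompt, ...messages, userPrompt]) > budget
  ) {
    messages.shift(); // drop the oldest turn first
  }
  return [systemPrompt, ...messages, userPrompt];
}
```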
* Implement compression for Anthropic
Fix lancedb sources
* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
* disable import on hosted instances (#339)
* disable import on hosted instances
* Update UI on disabled import/export
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Add support for gpt-4-turbo 128K model (#340)
resolves #336
Add support for gpt-4-turbo 128K model
* 315 show citations based on relevancy score (#316)
* settings for similarity score threshold and prisma schema updated
* prisma schema migration for adding similarityScore setting
* WIP
* Min score default change
* added similarityThreshold checking for all vectordb providers
* linting
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* rename localai to lmstudio
* forgot files that were renamed
* normalize model interface
* add model and context window limits
* update LMStudio tagline
* Fully working LMStudio integration
---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
* added JSONL export to workspace chats
* change permissions for workspace chat settings
* change permissions for workspace chat settings
* Show error for correct limit on fine-tune
Change sidebar position and permission
Remove check for MUM
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB
simplify chat interface
normalize LLM model interface
have compression abstraction
Cleanup compressor
TODO: Anthropic stuff
* Implement compression for Anthropic
Fix lancedb sources
* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
* WIP Anthropic support for chat, and chat/query with context
* Add onboarding support for Anthropic
* cleanup
* fix Anthropic answer parsing
move embedding selector to general util
Limit is due to POST body max size. Sufficiently large requests will abort automatically
We should report that error back on the frontend during embedding
Update vectordb providers to return on failure
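A hedged sketch of that failure contract; the `{ vectorized, error }` return shape is an assumption used for illustration so the caller can surface the message to the frontend instead of crashing:

```js
// Hedged sketch: a vector DB provider returns a flag plus error message instead of throwing,
// so the embedding route can report the failure back to the frontend.
async function addDocumentToNamespace(namespace, document) {
  try {
    // ...chunk, embed, and upsert the document here...
    return { vectorized: true, error: null };
  } catch (e) {
    console.error("addDocumentToNamespace", e.message);
    return { vectorized: false, error: e.message };
  }
}
```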
* WIP converted all sqlite models into prisma calls
* modify db setup and fix ApiKey model calls in admin.js
* renaming function params to be consistent
* converted adminEndpoints to utilize prisma orm
* converted chatEndpoints to utilize prisma orm
* converted inviteEndpoints to utilize prisma orm
* converted systemEndpoints to utilize prisma orm
* converted workspaceEndpoints to utilize prisma orm
* converting sql queries to prisma calls
* fixed default param bug for orderBy and limit
* fixed typo for workspace chats
* fixed order of deletion to account for sql relations
* fix invite CRUD and workspace management CRUD
* fixed CRUD for api keys
* created prisma setup scripts/docs for understanding how to use prisma
* prisma dependency change
* removing unneeded console.logs
* removing unneeded sql escape function
* linting and creating migration script
* migration from deprecated sqlite script update
* removing unneeded migrations in prisma folder
* create backup of old sqlite db and use transactions to ensure all operations complete successfully
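A minimal sketch of the backup-plus-transaction approach, assuming a copy of the legacy SQLite file and Prisma's array-form `$transaction`; paths, client access, and row shape are illustrative:

```js
// Hedged sketch: back up the legacy SQLite file, then run the copy inside one
// transaction so a failure leaves nothing half-migrated.
const fs = require("fs");

async function migrateLegacyDatabase(prisma, legacyRows) {
  fs.copyFileSync("storage/anythingllm.db", "storage/anythingllm.db.bak"); // assumed path
  await prisma.$transaction(
    legacyRows.map(({ table, data }) => prisma[table].create({ data }))
  );
}
```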
* adding migrations to gitignore
* updated PRISMA.md docs for info on how to use sqlite migration script
* comment changes
* adding back migrations folder to repo
* Reviewing SQL and prisma integration on fresh repo
* update inline key replacement
* ensure migration script executes and maps foreign_keys regardless of db ordering
* run migration endpoint
* support new prisma backend
* bump version
* change migration call
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Autodocument Swagger API with JSDocs on /v1/ endpoints for API access
implement single-user API keys
WIP Admin API Keys
* Create new api keys as both single and multi-user
* Add boot and telem
* Complete Admin API
* Complete endpoints
dark mode swagger
* update docs
* undo debug
* update docs and readme
* added ui for custom welcome messages and added label for custom logo in admin settings
* linting
* fixing img to use light/dark modes
* converted ChatBubble into component
* implemented backend for welcome messages and admin appearance page
* completed custom welcome messages for admin
* finished custom messages for single user mode
* merged with master and linted
* improved UI for appearance settings pages
* linted and merged with master
* small updates
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* implemented logo customization for single-user mode
* removing unneeded comments
* added dark and light mode support for default logo
* implemented dark and light mode switching in frontend
* fixed dark and light mode switching for failed to load logo from backend
* removed unneeded comment
* custom logos for admin implemented
* refactor logo mgmt functions
abstract logo management utils into their own file for simplicity
* added settings tab for appearance on single-user mode
* unchecking files with unneeded changes
* fixed appearance settings tab to be hidden on multiuser mode
* allow readall for logo
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* multi user wip
* WIP MUM features
* invitation mgmt
* suspend or unsuspend users
* workspace management
* manage chats
* manage chats
* add support for admin system settings so users can delete workspaces and chats per user can be limited
* fix issue with system var
update app to lazy load invite page
* cleanup and bug fixes
* wrong method
* update readme
* update readme
* update readme
* bump version to 0.1.0
* Related to Issue #122, Implemented custom prompt in workspace settings.
* run linter
* Remove code duplication for chat prompt injection
---------
Co-authored-by: Francisco Bischoff <franzbischoff@gmail.com>
* Add chat/conversation mode as the default chat mode
Show menu for toggling options for chat/query/reset command
Show chat status below input
resolves #61
* remove console logs
* 1. Define LLM Temperature as a workspace setting
2. Implement rudimentary table migration code for both new and existing repos to bring tables up to date
3. Trigger for workspace on update to update timestamp
4. Always fallback temp to 0.7
5. Extract WorkspaceModal into Tabbed content
6. Remove workspace name UNIQUE constraint (cannot be migrated :()
7. Add slug + seed when the existing slug is already taken (sketched below)
8. Separate name from slug so display names can be changed
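A hedged sketch of the slug + seed behavior from item 7, assuming the `slugify` package and a caller-provided existence check:

```js
// Hedged sketch: if the derived slug is taken, append a short random seed
// instead of failing on a uniqueness check. Names are illustrative.
const slugify = require("slugify");

async function uniqueSlugFor(name, slugExists) {
  let slug = slugify(name, { lower: true });
  if (await slugExists(slug)) {
    const seed = Math.floor(Math.random() * 1_000_000);
    slug = slugify(`${name}-${seed}`, { lower: true });
  }
  return slug;
}
```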
* remove blocking test return
* Updates for Linux for frontend/server
* frontend/server docker
* updated Dockerfile for deps related to node vectordb
* updates for collector in docker
* docker deps for ODT processing
* ignore another collector dir
* storage mount improvements; run as UID
* fix pypandoc version typo
* permissions fixes