* Update build process to support multi-platform builds
Bump @lancedb/vectordb to 0.1.19 for ARM & AMD64 compatibility
Patch Puppeteer on ARM builds because of broken Chromium
Resolves #539, resolves #548
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
* Added support for HTTPS to the server.
* Move boot scripts to a helper file
Catch bad SSL boot config
Fall back from SSL boot to HTTP (sketch below)
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
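A minimal sketch of the boot-helper idea, catching a bad SSL config and falling back to HTTP. The env var names and port are illustrative, not the project's actual configuration keys:

```js
// Hypothetical boot helper: try HTTPS, fall back to HTTP on bad SSL config.
const fs = require("fs");
const http = require("http");
const https = require("https");
const express = require("express");

const app = express();

function bootServer(port = 3001) {
  try {
    // Throws if the cert/key path is unset or unreadable.
    const credentials = {
      cert: fs.readFileSync(process.env.HTTPS_CERT_PATH),
      key: fs.readFileSync(process.env.HTTPS_KEY_PATH),
    };
    https
      .createServer(credentials, app)
      .listen(port, () => console.log(`Server booted in HTTPS mode on ${port}`));
  } catch (e) {
    console.error(`Invalid SSL boot config (${e.message}); falling back to HTTP.`);
    http
      .createServer(app)
      .listen(port, () => console.log(`Server booted in HTTP mode on ${port}`));
  }
}

bootServer();
```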
* wip: initial refactor of document processor to JS
* add Node.js PDF support
* wip: parity with Python processor
feat: add PPTX support
* fix: add forgotten files
* Remove python scripts totally
* wip: update Docker to boot new collector
* add package.json support
* update dockerfile for new build
* update gitignore and linting
* add more protections on file lookup
* update package.json
* test build
* update docker commands to use --cap-add=SYS_ADMIN so the web scraper can run (example below)
update all scripts to reflect this
remove docker build for branch
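For reference, the flag in question on a plain `docker run`; the image name and port mapping here are illustrative:

```sh
# Headless Chromium in the container needs extra privileges to sandbox,
# hence SYS_ADMIN. Adjust image/tag to whatever you actually run.
docker run -d \
  --cap-add=SYS_ADMIN \
  -p 3001:3001 \
  mintplexlabs/anythingllm:latest
```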
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing Prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
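A rough sketch of the guarded loader with download progress, assuming a transformers.js-style pipeline; the actual loader internals may differ:

```js
// Hedged sketch: guard the native embedder load and surface download progress.
async function loadNativeEmbedder() {
  const { pipeline } = await import("@xenova/transformers");
  try {
    return await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2", {
      // Fires repeatedly while model weights download.
      progress_callback: (data) => {
        if (data.status === "progress")
          console.log(`Downloading ${data.file}: ${Math.round(data.progress)}%`);
      },
    });
  } catch (e) {
    console.error("Native embedder failed to load:", e.message);
    return null;
  }
}
```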
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
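The persistence idea in miniature: cache the loaded model at module scope so chat N+1 skips the expensive reload. Names here are illustrative:

```js
// Module-level cache so repeat chats reuse the already-loaded model.
let cachedModel = null;

async function getModel(loadFn) {
  if (!cachedModel) {
    cachedModel = await loadFn(); // expensive: reads weights from disk once
  }
  return cachedModel; // instant for every subsequent chat
}
```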
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* feature: add LocalAI as LLM provider
* update Onboarding/mgmt settings
Grab models from the LocalAI models endpoint (see the sketch after this block)
merge with master
* update streaming for complete chunk streaming
update LocalAI LLM to be able to stream (streaming consumption also shown in the sketch below)
* force scheme on URL
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
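A sketch of both LocalAI touches above: listing models from its OpenAI-compatible `/v1/models` endpoint, then consuming a streamed chat completion chunk by chunk. The base URL and response-field handling are assumptions, and SSE lines split across chunk boundaries are ignored for brevity:

```js
// List model IDs from a LocalAI instance (OpenAI-compatible API).
async function localAIModels(basePath = "http://localhost:8080/v1") {
  const res = await fetch(`${basePath}/models`);
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const { data = [] } = await res.json();
  return data.map((model) => model.id);
}

// Stream a chat completion, printing tokens as they arrive (Node 18+ fetch).
async function streamChat(basePath, model, messages) {
  const res = await fetch(`${basePath}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const decoder = new TextDecoder();
  for await (const chunk of res.body) {
    for (const line of decoder.decode(chunk).split("\n")) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const token = JSON.parse(line.slice(6)).choices?.[0]?.delta?.content;
      if (token) process.stdout.write(token);
    }
  }
}
```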
* Using OpenAI API locally
* Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB
Simplify chat interface
Normalize LLM model interface
Add compression abstraction
Clean up compressor
TODO: Anthropic stuff
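The compression abstraction, very roughly: trim the oldest turns until the prompt fits the model's context window. The token counting and keep-first/keep-last policy here are illustrative stand-ins for the real compressor:

```js
// Evict oldest chat turns until the prompt fits the context window.
// `countTokens` is a stand-in for a real tokenizer (e.g. tiktoken).
function compressMessages(messages, countTokens, contextLimit) {
  const pruned = [...messages];
  // Preserve the system prompt (index 0) and the newest user message.
  while (pruned.length > 2 && countTokens(pruned) > contextLimit) {
    pruned.splice(1, 1);
  }
  return pruned;
}
```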
* Implement compression for Anthropic
Fix LanceDB sources
* clean up vector DBs and check that LanceDB, Chroma, and Pinecone return valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
* disable import on hosted instances (#339)
* disable import on hosted instances
* Update UI on disabled import/export
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Add support for gpt-4-turbo 128K model (#340)
Resolves #336
* Show citations based on relevancy score (#316, resolves #315)
* add settings for similarity score threshold and update Prisma schema
* Prisma schema migration for adding similarityScore setting
* WIP
* Min score default change
* added similarityThreshold checking for all vector DB providers (see the sketch after this block)
* linting
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
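The gist of the relevancy gate, as a sketch; the score field name and default threshold are assumptions about the vector DB response shape:

```js
// Drop sources that do not clear the workspace's similarity threshold,
// so low-relevance chunks never surface as citations.
function filterBySimilarity(sources, similarityThreshold = 0.25) {
  return sources.filter(
    (source) => (source.score ?? 0) >= similarityThreshold
  );
}
```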
* rename localai to lmstudio
* forgot files that were renamed
* normalize model interface
* add model and context window limits
* update LMStudio tagline
* Fully working LMStudio integration
---------
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
* WIP Anthropic support for chat, and chat and query w/ context
* Add onboarding support for Anthropic
* cleanup
* fix Anthropic answer parsing
move embedding selector to general util
* WIP converted all sqlite models into prisma calls
* modify db setup and fix ApiKey model calls in admin.js
* renaming function params to be consistent
* converted adminEndpoints to utilize prisma orm
* converted chatEndpoints to utilize prisma orm
* converted inviteEndpoints to utilize prisma orm
* converted systemEndpoints to utilize prisma orm
* converted workspaceEndpoints to utilize prisma orm
* converting sql queries to prisma calls
* fixed default param bug for orderBy and limit
* fixed typo for workspace chats
* fixed order of deletion to account for sql relations
* fix invite CRUD and workspace management CRUD
* fixed CRUD for api keys
* created prisma setup scripts/docs for understanding how to use prisma
* prisma dependency change
* removing unneeded console.logs
* removing unneeded sql escape function
* linting and creating migration script
* update script for migration from the deprecated SQLite db
* removing unneeded migrations in prisma folder
* create backup of old sqlite db and use transactions to ensure all operations complete successfully
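A sketch of the transactional copy from the legacy SQLite tables into Prisma models; the model and column names are illustrative, not the actual schema:

```js
const { PrismaClient } = require("@prisma/client");
const prisma = new PrismaClient();

// Insert all legacy rows in one transaction: if any create fails, Prisma
// rolls the whole batch back and the sqlite backup stays authoritative.
async function migrateWorkspaces(legacyRows) {
  await prisma.$transaction(
    legacyRows.map((row) =>
      prisma.workspaces.create({ data: { name: row.name, slug: row.slug } })
    )
  );
}
```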
* adding migrations to gitignore
* updated PRISMA.md docs for info on how to use sqlite migration script
* comment changes
* adding back migrations folder to repo
* Reviewing SQL and Prisma integration on a fresh repo
* update inline key replacement
* ensure migration script executes and maps foreign_keys regardless of db ordering
* run migration endpoint
* support new prisma backend
* bump version
* change migration call
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* Remove LangchainJS for chat support chaining
Implement runtime LLM selection
Implement AzureOpenAI Support for LLM + Embedding
WIP on frontend
Update env to reflect the new fields
* Replace keys with LLM Selection in settings modal
Enforce checks for new ENVs depending on LLM selection
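Sketch of the per-provider ENV enforcement; the provider keys and ENV names below are illustrative:

```js
// Each LLM provider declares the ENVs it cannot run without.
const REQUIRED_ENVS = {
  openai: ["OPEN_AI_KEY"],
  azure: ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_KEY"],
};

function validateLLMSelection(provider, env = process.env) {
  const missing = (REQUIRED_ENVS[provider] ?? []).filter((key) => !env[key]);
  if (missing.length > 0)
    throw new Error(`Missing ENVs for ${provider}: ${missing.join(", ")}`);
}
```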
* implement drag-and-drop uploader
show file upload progress
write files to the hot directory
build simple Flask API to process files one-off
* move document processor calls to util
build out Dockerfile to run both processes at the same time
update UI to check for document processor before upload (sketch below)
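Sketch of the pre-upload availability check; the processor's port and health route are assumptions:

```js
// Ping the document processor before enabling the uploader in the UI.
async function processorOnline(base = "http://localhost:8888") {
  try {
    const res = await fetch(base, { method: "GET" });
    return res.ok;
  } catch {
    return false; // processor not booted; disable uploads
  }
}
```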
* disable pragma update on boot
* dockerfile changes
* add filetype restrictions based on the Python app's support response and show rejected files in the UI
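Roughly how the client-side gate can work, given the supported-types response from the processor (response shape assumed):

```js
// Split a dropped file list into accepted and rejected sets based on the
// extensions the processor reports it can handle.
function partitionByFiletype(files, acceptedExts) {
  const accepted = [];
  const rejected = [];
  for (const file of files) {
    const ext = `.${file.name.split(".").pop().toLowerCase()}`;
    (acceptedExts.includes(ext) ? accepted : rejected).push(file);
  }
  return { accepted, rejected }; // rejected files get surfaced in the UI
}
```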
* cleanup
* stub migrations on boot to prevent exit condition
* update CF template for AWS deploy
* feat: add argument for CPU arch in Dockerfile
* feat: add argument for CPU arch in Docker Compose
* docs: add steps for CPU-arch-based Docker builds
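Illustrative invocation; the actual build-arg name is defined by the project's Dockerfile:

```sh
# Build for a specific CPU architecture via the new build argument.
docker build --build-arg ARCH=arm64 -t anythingllm:arm64 .
```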
* Updates for Linux for frontend/server
* frontend/server docker
* updated Dockerfile for deps related to node vectordb
* updates for collector in docker
* docker deps for ODT processing
* ignore another collector dir
* storage mount improvements; run as UID
* fix pypandoc version typo
* permissions fixes