* Refactor API endpoint chat handler into its own function
remove legacy `chatWithWorkspace` and clean up `index.js`
* Add `sessionId` in dev API to logically partition chats without server-side state (see the sketch after this block)
* WIP slash presets
* WIP slash command customization CRUD + validations complete
* backend slash command support
* fix permission setting on new slash commands
rework form submit and pattern on frontend
* Add field updates for hooks
set required=true on field
add user<>command constraint to keep commands unique
enforce uniqueness via tertiary uid field on table for multi- and non-multi-user modes
* reset migration
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
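A minimal sketch of the stateless `sessionId` partitioning idea above, using an in-memory store purely for illustration; the real implementation persists chats to the database, and all names below are assumptions:

```js
// Hypothetical sketch: history is partitioned by (workspace, sessionId), so
// API callers can run parallel conversations with no server-side session.
const historyStore = new Map();

function historyKey(workspaceSlug, sessionId = null) {
  return `${workspaceSlug}:${sessionId ?? "default"}`;
}

function appendChat(workspaceSlug, sessionId, prompt, response) {
  const key = historyKey(workspaceSlug, sessionId);
  const history = historyStore.get(key) ?? [];
  history.push({ prompt, response, sentAt: Date.now() });
  historyStore.set(key, history);
}

function getChatHistory(workspaceSlug, sessionId = null) {
  return historyStore.get(historyKey(workspaceSlug, sessionId)) ?? [];
}

// Two sessions against the same workspace never see each other's history.
appendChat("docs", "session-A", "Hello?", "Hi!");
appendChat("docs", "session-B", "Status?", "All good.");
console.log(getChatHistory("docs", "session-A").length); // 1
```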
* if a document is pinned, do not return the queryRefusalResponse message (see sketch below)
* forgot embed.js patch
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
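A rough sketch of the pin-aware refusal logic: `queryRefusalResponse` comes from the notes above, everything else is an assumption for illustration:

```js
// Illustrative sketch: in query mode, only refuse when nothing relevant was
// found AND no documents are pinned. Pinned documents are always included as
// context, so refusing while a pin exists would be wrong.
function buildChatResponse({ mode, vectorResults, pinnedDocs, queryRefusalResponse }) {
  const hasContext = vectorResults.length > 0 || pinnedDocs.length > 0;
  if (mode === "query" && !hasContext) {
    return { text: queryRefusalResponse, sources: [] };
  }
  const context = [...pinnedDocs, ...vectorResults];
  return { text: null /* filled in by the LLM using `context` */, sources: context };
}

// Example: a pinned document suppresses the refusal even with zero vector hits.
console.log(
  buildChatResponse({
    mode: "query",
    vectorResults: [],
    pinnedDocs: [{ title: "handbook.pdf" }],
    queryRefusalResponse: "There is no relevant information in this workspace.",
  })
);
```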
* Enable per-workspace provider/model combination (see sketch below)
* cleanup
* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model
* add space
---------
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
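A minimal sketch of resolving the provider/model per workspace with a system-wide fallback; the field and env names here are assumptions, not the project's actual schema:

```js
// Illustrative sketch: the workspace's own provider/model wins; otherwise
// fall back to the system-wide env settings so existing setups keep working.
function getLLMSelection(workspace = {}, env = process.env) {
  return {
    provider: workspace.chatProvider ?? env.LLM_PROVIDER ?? "openai",
    model: workspace.chatModel ?? env.OPEN_MODEL_PREF ?? "gpt-3.5-turbo",
  };
}

console.log(getLLMSelection({ chatProvider: "anthropic", chatModel: "claude-2" }));
console.log(getLLMSelection({})); // falls back to env/system defaults
```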
* Enable full-text queries on documents
Show alert modal on first pin for client
Add ability to use pins in stream/chat/embed
* typo and copy update
* simplify spread of context and sources
* refactor stream/chat/embed-stream into a single execution path so it is easier to maintain and build upon (see sketch below)
* no thread in sync chat since only api uses it
adjust import locations
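A hypothetical sketch of the single-execution-path shape: sync chat, streamed chat, and embed streaming all call one function and differ only in how tokens are emitted; every name here is illustrative:

```js
// Illustrative sketch: one code path for all chat modes; stream vs. sync is
// just a flag plus an emission callback.
async function runChat({ prompt, stream = false, onToken = () => {} }) {
  const sources = []; // pinned documents + vector results would be merged here
  const reply = `echo: ${prompt}`;
  if (!stream) return { text: reply, sources };
  for (const token of reply.split(" ")) onToken(token + " ");
  return { text: reply, sources };
}

const chatSync = (prompt) => runChat({ prompt, stream: false });
const chatStream = (prompt, out) =>
  runChat({ prompt, stream: true, onToken: (t) => out.write(t) });

chatSync("hello").then((r) => console.log(r.text));
chatStream("hello again\n", process.stdout);
```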
* WIP RLHF works on historical messages
* refactor Actions component
* completed RLHF up and down votes for chats (see sketch below)
* add defaults for HistoricalMessage params
* refactor RLHF implementation
remove forwardRef on history items to prevent re-renders
* remove dup id
* Add rating to CSV output
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
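A small sketch of the thumbs up/down feedback pattern: a nullable score on the chat record that the UI toggles (and that can be exported in the CSV). The field name and toggle behavior are assumptions:

```js
// Illustrative sketch: feedbackScore is true (up), false (down), or null.
const chats = new Map(); // chatId -> { prompt, response, feedbackScore }

function rateChat(chatId, isPositive) {
  const chat = chats.get(chatId);
  if (!chat) throw new Error(`No chat with id ${chatId}`);
  // Clicking the same rating twice clears it; clicking the other flips it.
  chat.feedbackScore = chat.feedbackScore === isPositive ? null : isPositive;
  return chat;
}

chats.set(1, { prompt: "Hi", response: "Hello!", feedbackScore: null });
rateChat(1, true); // thumbs up
rateChat(1, true); // same click again clears the rating
console.log(chats.get(1).feedbackScore); // null
```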
* WIP embedded app
* WIP got response from backend in embedded app
* WIP streaming prints to embedded app
* implement streaming and minified Tailwind styling in the embedded app
* WIP embedded app history functional
* load params from the embed script tag into the embedded app (see sketch below)
* rough in modularization of embed chat
clean up dev process for easier dev support
move all chat to components
todo: build process
todo: backend support
* remove eslint config
* Implement models and cleanup embed chat endpoints
Improve build process for embed
prod minification and bundle size awareness
WIP
* forgot files
* rename to embed folder
* introduce chat modal styles
* add middleware validations on embed chat
* auto open param and default greeting
* reset chat history
* Admin embed config page
* Admin Embed Chats mgmt page
* update embed
* non-privileged endpoint support
* more style support
reopen if chat was last opened
* update comments
* remove unused imports
* allow change of workspace for embedconfig
* update the error message shown when lookup fails
* update reset script
* update instructions
* Add more styling options
Add sponsor text at bottom
Support dynamic container height
Loading animations
* publish new embed script
* Add back syntax highlighting and keep bundle small via dynamic script build
* add hint
* update readme
* update copy for the embed snippet with a link to styles
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
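A sketch of how an embed script can read its settings off its own `<script>` tag, as in "load params from script tag" above; the attribute names are assumptions for illustration:

```js
// Illustrative browser-side sketch. The host page embeds something like:
//
//   <script src=".../chat-widget.min.js"
//           data-embed-id="abc123"
//           data-base-api-url="https://example.com/api/embed"
//           data-open-on-load="on"
//           data-greeting="How can I help?"></script>
//
// dataset camel-cases the data-* attributes automatically.
function readEmbedSettings() {
  const script = document.currentScript;
  const settings = { ...script.dataset }; // { embedId, baseApiUrl, openOnLoad, greeting }
  if (!settings.embedId) throw new Error("data-embed-id is required");
  return settings;
}
```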
* create configurable topN per workspace (see sketch below)
* Update TopN UI text
Fix fallbacks for all providers
Add SQLite CHECK constraint on the TopN value
* merge with master
Update Zilliz provider for variable TopN
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
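A minimal sketch of the application-side fallback that mirrors a DB-level CHECK (e.g. `topN >= 1`) so every vector provider receives a sane similarity-search limit; the bounds and default are assumptions:

```js
// Illustrative sketch: clamp/validate the per-workspace topN before it
// reaches any vector DB provider.
function resolveTopN(workspace = {}) {
  const fallback = 4;
  const n = Number(workspace.topN);
  if (!Number.isInteger(n) || n < 1) return fallback;
  return n;
}

console.log(resolveTopN({ topN: 10 })); // 10
console.log(resolveTopN({ topN: 0 }));  // 4 (fallback, matching the CHECK)
console.log(resolveTopN({}));           // 4
```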
* add support for Mistral API (see sketch below)
* update docs to show support for Mistral
* add default temperature to all providers; suggested values differ per provider
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
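A hedged sketch of a Mistral chat call. Mistral's API follows the OpenAI chat-completions shape, but treat the details below (model name, default temperature) as assumptions rather than the project's actual code:

```js
// Illustrative sketch: call Mistral's OpenAI-compatible chat endpoint.
// Requires Node 18+ for the global fetch.
async function mistralChat(messages, { model = "mistral-tiny", temperature = 0.0 } = {}) {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
    },
    // Provider-specific default temperature, since good defaults differ
    // between providers (the point of the commit above).
    body: JSON.stringify({ model, temperature, messages }),
  });
  if (!res.ok) throw new Error(`Mistral API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```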
* WIP model selection per workspace (migrations and OpenAI saves properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset; sync AI provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs
* set preferred model for chat on class instantiation (see sketch below)
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to localai models
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
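A minimal sketch of "set preferred model for chat on class instantiation": the provider class takes the workspace's model preference and falls back to the env default. Class and env names are assumptions:

```js
// Illustrative sketch: workspace-level model preference wins at construction
// time; otherwise the system-wide env default applies.
class OpenAiLLM {
  constructor(embedder = null, modelPreference = null) {
    this.model = modelPreference ?? process.env.OPEN_MODEL_PREF ?? "gpt-3.5-turbo";
    this.embedder = embedder;
  }
}

const llm = new OpenAiLLM(null, "gpt-4"); // per-workspace selection
console.log(llm.model); // "gpt-4"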
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection (see sketch below)
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
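A sketch of grouping models by organization for the selection UI, since Together AI exposes many models from many orgs; the model-list shape is an assumption:

```js
// Illustrative sketch: bucket models by organization so the frontend can
// render labeled option groups.
function groupModelsByOrganization(models) {
  return models.reduce((groups, model) => {
    const org = model.organization ?? "Unknown";
    (groups[org] ??= []).push(model);
    return groups;
  }, {});
}

const grouped = groupModelsByOrganization([
  { id: "togethercomputer/llama-2-70b-chat", organization: "Meta" },
  { id: "mistralai/Mixtral-8x7B-Instruct-v0.1", organization: "mistralai" },
]);
console.log(Object.keys(grouped)); // [ "Meta", "mistralai" ]
```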
* Implement use of native embedder (all-MiniLM-L6-v2); see the sketch after this block
stop showing Prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
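A sketch of the native-embedder pattern described above: load the local all-MiniLM-L6-v2 model once, wrapped in try/catch, and keep it in memory so chat N+1 skips the reload. The loader here is a stub; all names and shapes are assumptions:

```js
// Stub loader standing in for the real download/initialize step (e.g. a
// transformers.js pipeline).
async function loadLocalModel(name, { onProgress = () => {} } = {}) {
  onProgress(100); // the real loader reports incremental download progress
  return { name, embed: async (_text) => new Array(384).fill(0) };
}

class NativeEmbedder {
  #model = null; // persisted in memory so chat N+1 skips the reload

  async client() {
    if (this.#model) return this.#model;
    try {
      this.#model = await loadLocalModel("all-MiniLM-L6-v2", {
        onProgress: (pct) => console.log(`Embedder download: ${pct}%`),
      });
    } catch (e) {
      // Safety check: a failed download surfaces cleanly instead of crashing chat.
      throw new Error(`NativeEmbedder could not load model: ${e.message}`);
    }
    return this.#model;
  }
}

new NativeEmbedder().client().then((m) => console.log(`loaded ${m.name}`));
```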
* feature: add LocalAI as llm provider
* update Onboarding/mgmt settings
Grab models from the models endpoint for LocalAI (see sketch below)
merge with master
* update streaming for complete chunk streaming
update localAI LLM to be able to stream
* force scheme on URL
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
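A sketch of populating the LocalAI model list from its OpenAI-compatible `/v1/models` endpoint, with the URL scheme forced as in "force scheme on URL" above; exact behavior is an assumption:

```js
// Illustrative sketch (Node 18+ for global fetch).
async function localAIModels(basePath) {
  // Force an http(s) scheme so bare hosts like "localhost:8080" still work.
  const url = /^https?:\/\//i.test(basePath) ? basePath : `http://${basePath}`;
  const res = await fetch(new URL("/v1/models", url));
  if (!res.ok) return [];
  const { data = [] } = await res.json();
  return data.map((m) => m.id);
}

// localAIModels("localhost:8080").then(console.log);
```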
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB
simplify chat interface
normalize LLM model interface
add compression abstraction (see the sketch at the end of this section)
clean up compressor
TODO: Anthropic stuff
* Implement compression for Anthropic
Fix LanceDB sources
* clean up vector DBs and check that Lance, Chroma, and Pinecone return valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
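A minimal sketch of the prompt-window compression abstraction: if the assembled messages exceed the model's context window, drop (or, in a fuller version, summarize) the oldest history until they fit. Token counting, limits, and names are assumptions:

```js
// Illustrative sketch: keep the system prompt, trim oldest history first.
function compressMessages(messages, { tokenLimit = 4096, countTokens }) {
  const total = (msgs) => msgs.reduce((n, m) => n + countTokens(m.content), 0);
  const [system, ...rest] = messages;
  const compressed = [...rest];
  // A real compressor might summarize the dropped turns into one synthetic
  // message instead of discarding them outright.
  while (compressed.length > 1 && total([system, ...compressed]) > tokenLimit) {
    compressed.shift();
  }
  return [system, ...compressed];
}

// Crude token estimate (~4 chars per token) just to make the sketch runnable.
const estimate = (text) => Math.ceil(text.length / 4);
const fitted = compressMessages(
  [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "old question ".repeat(500) },
    { role: "user", content: "newest question" },
  ],
  { tokenLimit: 200, countTokens: estimate }
);
console.log(fitted.length); // system prompt + the most recent messages that fit
```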