* add LMStudio agent support (generic)
"work" with non-tool callable LLMs, highly dependent on system specs
* add comments
* enable few-shot prompting per function for OSS models
* Add Agent support for Ollama models
* azure, groq, koboldcpp agent support complete + WIP togetherai
* WIP gemini agent support
* gemini agent support blocked; will not fix for now
* azure fix
* merge fix
* add localai agent support
* azure untooled agent support
* merge fix
* refactor implementation of several agent providers
* update bad merge comment
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
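For the untooled providers above, a rough idea of how prompt-based tool calling can work: the function schemas plus a few-shot example are injected into the prompt, and a JSON "tool call" is parsed back out of the raw completion. The shapes and helper names below are illustrative, not the repo's actual implementation.

```ts
// Illustrative only: emulate tool calling for models without native
// function-calling support. The schema and a few-shot example are pushed
// into the prompt; the "call" is parsed back out of plain text.
type ToolCall = { name: string; arguments: Record<string, unknown> };

function buildUntooledPrompt(functions: object[], userInput: string): string {
  return [
    "You may call a tool by replying with ONLY a JSON object.",
    `Available tools: ${JSON.stringify(functions)}`,
    // A few-shot example per function helps smaller OSS models match the shape.
    'Example: {"name":"web-search","arguments":{"query":"latest node LTS"}}',
    `User: ${userInput}`,
  ].join("\n");
}

function parseToolCall(completion: string): ToolCall | null {
  // Models often wrap the JSON in prose or code fences; grab the first
  // object-looking span and try to parse it.
  const match = completion.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    const parsed = JSON.parse(match[0]);
    return typeof parsed?.name === "string" ? (parsed as ToolCall) : null;
  } catch {
    return null; // treat the reply as ordinary chat text instead
  }
}
```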
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter
update tagline
update comment
* update timeout comment
* wait for first chunk to start timer (sketched after this section)
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
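A minimal sketch of that stall fix, assuming the response is consumed as an async iterable: no deadline is applied to the first chunk (time-to-first-token can be long), and only after it arrives must each subsequent chunk land within the silence window. The function name and timeout value are placeholders.

```ts
// Sketch: abort a hanging stream, but only start timing once the first
// chunk has arrived so slow model startup is not mistaken for a stall.
async function* withChunkTimeout<T>(
  stream: AsyncIterable<T>,
  silenceMs = 500, // placeholder window between chunks
): AsyncGenerator<T> {
  const it = stream[Symbol.asyncIterator]();
  // First chunk: wait indefinitely.
  let result = await it.next();
  while (!result.done) {
    yield result.value;
    // Subsequent chunks: race the reader against a per-chunk deadline.
    let timer!: NodeJS.Timeout;
    const deadline = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error("stream stalled")), silenceMs);
    });
    try {
      result = await Promise.race([it.next(), deadline]);
    } finally {
      clearTimeout(timer);
    }
  }
}
```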
* add LLM support for perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
* add support for mistral api
* update docs to show support for Mistral
* add a default temperature to all providers, since the same value suggests different results per provider (sketched after this section)
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
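A sketch of the per-provider temperature defaults idea; the table values below are made up for illustration, not the defaults the app actually ships.

```ts
// Illustrative defaults: the same temperature behaves differently on
// different providers, so each gets its own fallback instead of one
// global constant. Values here are made up.
const DEFAULT_TEMPERATURE: Record<string, number> = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0,
};

function temperatureFor(provider: string, override?: number): number {
  if (typeof override === "number") return override;
  return DEFAULT_TEMPERATURE[provider] ?? 0.7; // generic fallback
}
```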
* WIP model selection per workspace (migrations and OpenAI save properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset, reset Ai provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs
* set preferred model for chat on class instantiation (sketched after this section)
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to localai models
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
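Roughly what "preferred model on class instantiation" can look like, assuming a workspace record with an optional `chatModel` field; the class, field, and env names here are illustrative.

```ts
// Illustrative: resolve the chat model once, when the provider class is
// constructed: workspace setting first, ENV preference second, then a
// hard-coded default. Names are placeholders.
interface Workspace {
  chatModel?: string | null;
}

class OpenAiProvider {
  readonly model: string;

  constructor(workspace?: Workspace) {
    this.model =
      workspace?.chatModel ??        // per-workspace selection wins
      process.env.OPEN_MODEL_PREF ?? // system-wide preference
      "gpt-3.5-turbo";               // last-resort default
  }
}
```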
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection (sketched after this section)
* linting + add data handling for TogetherAI
* change truthy statement
patch validLLMSelection method
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
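The organization grouping mentioned above (also used for OpenRouter) can be sketched like this, assuming model ids of the form `org/model-name`; the helper is illustrative.

```ts
// Illustrative: group "org/model" ids under a capitalized org label for
// the model <select>, with orgs and models sorted alphabetically.
type Model = { id: string }; // e.g. "togethercomputer/llama-2-70b-chat"

function groupByOrganization(models: Model[]): Record<string, Model[]> {
  const groups: Record<string, Model[]> = {};
  for (const model of models) {
    const org = model.id.split("/")[0] || "unknown";
    // Uppercase the first letter for the option-group label.
    const label = org.charAt(0).toUpperCase() + org.slice(1);
    (groups[label] ??= []).push(model);
  }
  for (const label of Object.keys(groups)) {
    groups[label].sort((a, b) => a.id.localeCompare(b.id));
  }
  // Sort the groups themselves by organization label.
  return Object.fromEntries(
    Object.entries(groups).sort(([a], [b]) => a.localeCompare(b)),
  );
}
```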
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats (sketched at the end of this section)
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
* feature: add LocalAI as llm provider
* update Onboarding/mgmt settings
Grab models from models endpoint for localai (sketched at the end of this section)
merge with master
* update streaming to emit complete chunks
update localAI LLM to be able to stream
* force scheme on URL
---------
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
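The "persist model in memory for N+1 chats" change amounts to caching the loaded model at module scope so only the first chat pays the load cost; a simplified sketch, with the loader left abstract:

```ts
// Simplified: keep the loaded built-in LLM resident so chats after the
// first reuse it instead of re-reading weights from disk.
let cachedModel: unknown = null;

async function getModel(loadFromDisk: () => Promise<unknown>) {
  if (cachedModel) return cachedModel; // fast path: chats N+1
  cachedModel = await loadFromDisk();  // slow path: first chat only
  return cachedModel;
}
```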
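And for the LocalAI commits: LocalAI exposes an OpenAI-compatible `/v1/models` endpoint, so the settings UI can list whatever models the user's instance actually has loaded, while the "force scheme on URL" fix guards against base paths entered without `http(s)://`. The helper below is a sketch, not the repo's code.

```ts
// Sketch: discover available LocalAI models at runtime via the
// OpenAI-compatible /v1/models endpoint.
async function localAiModels(basePath: string): Promise<string[]> {
  // Force a scheme on the URL if the user omitted it.
  const base = /^https?:\/\//i.test(basePath) ? basePath : `http://${basePath}`;
  const response = await fetch(`${base}/v1/models`);
  if (!response.ok) return [];
  const { data = [] } = await response.json();
  return data.map((model: { id: string }) => model.id);
}
```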