Commit Graph

85 Commits

Author SHA1 Message Date
Timothy Carambat
b6be43be95
Add Speech-to-text and Text-to-speech providers (#1394)
* Add Speech-to-text and Text-to-speech providers

* add files and update comment

* update comments

* patch: bad playerRef check
2024-05-14 11:57:21 -07:00
Timothy Carambat
64b62290d7
Set gpt-4o as default for OpenAI (#1391) 2024-05-13 14:31:49 -07:00
Sean Hatfield
9ed2309757
[FEAT] Add API key support for Oobabooga Web UI (#1354)
* add api key support for oobabooga web ui

* don't expose API Key for TextWebGenUi

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-13 12:58:16 -07:00
Sean Hatfield
977a07db86
[FEAT] Text Generation Web UI LLM provider support (#1279)
* add text gen web ui LLM provider support

* update README

* README typo

* update TextWebUI display name
patch workspace<>model support for provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-08 11:56:30 -07:00
Sean Hatfield
fc77b46800
[FEAT] KoboldCPP LLM Support (#1268)
* koboldcpp LLM support

* update .env.examples for koboldcpp support

* update LLM preference order
update koboldcpp comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 12:12:44 -07:00
Sean Hatfield
3caebc47b4
[FEAT] Cohere LLM and embedder support (#1233)
* getChatCompletion working WIP streaming

* WIP

* working streaming WIP abort stream

* implement cohere embedder support

* remove inputType option from cohere embedder

* fix Cohere LLM not aborting stream when canceled by user

* Patch Cohere implementation

* add cohere to onboarding

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-05-02 10:35:50 -07:00
Timothy Carambat
547d4859ef
Bump openai package to latest (#1234)
* Bump `openai` package to latest
Tested all except LocalAI

* bump LocalAI support with latest image

* add deprecation notice

* linting
2024-04-30 12:33:42 -07:00
Timothy Carambat
df17fbda36
Add generic OpenAI endpoint support (#1178)
* Add generic OpenAI endpoint support

* allow any input for model in case provider does not support models endpoint
2024-04-23 13:06:07 -07:00
Timothy Carambat
c65f890afc
Add LMStudio embedding endpoint support (#1141)
* Add LMStudio embedding endpoint support

* update alive path check for HEAD
remove commented JSX

* update comment
2024-04-19 15:36:07 -07:00
Timothy Carambat
a5bb77f97a
Agent support for @agent default agent inside workspace chat (#1093)
V1 of agent support via built-in `@agent` that can be invoked alongside normal workspace RAG chat.
2024-04-16 10:50:10 -07:00
timothycarambat
9f55f44efb update docs 2024-03-25 17:44:16 -07:00
Luna Midori
ed9c6b4470
Update HOW_TO_USE_DOCKER.md (#958) 2024-03-25 17:42:52 -07:00
Timothy Carambat
1135853740
Patch LMStudio Inference server bug integration (#957) 2024-03-22 14:39:30 -07:00
timothycarambat
568e78d3ef update Docker readme 2024-03-19 09:04:33 -07:00
Timothy Carambat
0ada882991
Support external transcription providers (#909)
* Support External Transcription providers

* patch files

* update docs

* fix return data
2024-03-14 15:43:26 -07:00
Francis Mercier
c472598bfc
HOW_TO_USE_DOCKER.md - Missing -ItemType File for .env creation (#879)
Update HOW_TO_USE_DOCKER.md

Missing `-ItemType File` on New-Item causes the .env creation to default to a folder when it should be a file (Docker won't run when that happens, since a file is expected)
2024-03-08 16:40:27 -08:00
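For context, the fix described above boils down to passing `-ItemType File` explicitly so PowerShell creates a file rather than a directory. A minimal sketch, assuming the `.env` file is created in the current working directory (the exact path used in the repo docs may differ):

```powershell
# Without -ItemType File, New-Item can create a directory named ".env".
# Docker expects .env to be a regular file, so request one explicitly.
New-Item -Path ".\.env" -ItemType File
```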
Francisco Bischoff
9ce3d1150d
Update Ubuntu base image and improve Dockerfile (#609)
* Update Ubuntu base image and improve Dockerfile

* Add unzip to Docker image dependencies

Needed for the arm64 build

* reset tabs

* formalized lint rules for hadolint. However, the Docker formatting is handled by the MS Docker extension, which doesn't indent code as expected. WIP.

* found a workaround to keep formatting

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 16:34:45 -08:00
Sean Hatfield
0634013788
[FEAT] Groq LLM support (#865)
* Groq LLM support complete

* update useGetProvidersModels for groq models

* Add definitions
update comments and error log reports
add example envs

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-03-06 14:48:38 -08:00
Timothy Carambat
b64cb199f9
788 ollama embedder (#814)
* Add Ollama embedder model support calls

* update docs
2024-02-26 16:12:20 -08:00
Sean Hatfield
633f425206
[FEAT] OpenRouter integration (#784)
* WIP openrouter integration

* add OpenRouter options to onboarding flow and data handling

* add todo to fix headers for rankings

* OpenRouter LLM support complete

* Fix hanging response stream with OpenRouter
update tagline
update comment

* update timeout comment

* wait for first chunk to start timer

* sort OpenRouter models by organization

* uppercase first letter of organization

* sort grouped models by org

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-23 17:18:58 -08:00
Sean Hatfield
80ced5eba4
[FEAT] PerplexityAI Support (#778)
* add LLM support for perplexity

* update README & example env

* fix ENV keys in example env files

* slight changes for QA of perplexity support

* Update Perplexity AI name

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-02-22 12:48:57 -08:00
timothycarambat
f376872e43 patch Puppeteer base URL 2024-02-22 10:59:45 -08:00
Sean Hatfield
e99c74aec1
[DOCS] Update Docker documentation to show how to setup Ollama with Dockerized version of AnythingLLM (#774)
* update HOW_TO_USE_DOCKER to help with Ollama setup using docker

* update HOW_TO_USE_DOCKER

* styles update

* create separate README for ollama and link to it in HOW_TO_USE_DOCKER

* styling update
2024-02-21 18:42:32 -08:00
Timothy Carambat
b636a07e52
Update Docker Run command (#756)
* Update Docker Run command

* Update Docker Run command

* Update Docker Run command
2024-02-19 10:29:47 -08:00
timothycarambat
7c29b37844 show telemetry disable 2024-02-12 09:05:30 -08:00
Timothy Carambat
2bc11d3f1a
Implement support for HuggingFace Inference Endpoints (#680) 2024-02-06 09:17:51 -08:00
Sean Hatfield
9d41ff58e2
[FEAT] add support for new openai embedding models (#653)
* add support for new openai models

* QOL changes/improve logic for adding new openai embedding models

* add example file inputs for OpenAI embedding ENV selection

* Fix if stmt conditional

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-29 08:48:27 -08:00
Hakeem Abbas
5614e2ed30
feature: Integrate Astra as vectorDBProvider (#648)
* feature: Integrate Astra as vectorDBProvider

feature: Integrate Astra as vectorDBProvider

* Update .env.example

* Add env.example to docker example file
Update spellcheck for Astra
Update Astra key for vector selection
Update order of AstraDB options
Resize Astra logo image to 330x330
Update methods of Astra to take in latest vectorDB params like TopN and more
Update Astra interface to support default methods and avoid crash errors from 404 collections
Update Astra interface to comply to max chunk insertion limitations
Update Astra interface to dynamically set dimensionality from chunk 0 size on creation

* reset workspaces

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-26 13:07:53 -08:00
Sean Hatfield
2f3db0e63a
[FEAT] support pinecone serverless (#639)
* migrate pinecone package to latest version and migrate pinecone vectordb provider class

* remove pinecone environment name env variable and update docs to reflect removal & serverless support complete

* migrate query for pinecone db

* typo in log

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-22 16:41:20 -08:00
Timothy Carambat
0df86699e7
feat: Add support for Zilliz Cloud by Milvus (#615)
* feat: Add support for Zilliz Cloud by Milvus

* update placeholder text
update data handling stmt

* update zilliz descriptor
2024-01-17 18:00:54 -08:00
Sean Hatfield
c2c8fe9756
add support for mistral api (#610)
* add support for mistral api

* update docs to show support for Mistral

* add default temp to all providers, suggest different results per provider

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 14:42:05 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
Timothy Carambat
58971e8b30
Build & Publish AnythingLLM for ARM64 and x86 (#549)
* Update build process to support multi-platform builds
Bump @lancedb/vectordb to 0.1.19 for ARM&AMD compatibility
Patch puppeteer on ARM builds because of broken chromium
resolves #539
resolves #548

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-01-08 16:15:01 -08:00
timothycarambat
19e3889f29 update envs to not display https localhost 2024-01-06 15:38:44 -08:00
pritchey
74d2711d80
523-Added support for HTTPS to Server. (#524)
* Added support for HTTPS to server.

* Move boot scripts to helper file
catch bad ssl boot config
fallback SSL boot to HTTP

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-04 17:22:15 -08:00
Timothy Carambat
d7481671ba
Prevent external service localhost question (#497)
* Prevent external service localhost question

* add 0.0.0.0 to docker-invalid URL

* clarify hint
2023-12-28 10:47:02 -08:00
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
timothycarambat
31ff4f0832 docs: chain Windows Docker commands for a single-Enter run 2023-12-20 11:40:04 -08:00
Timothy Carambat
c613eff31c
[Docker] Windows Docker command in Powershell (#480)
* wip

* side by side test

* patch syntax highlighting

* remove spacing formatting

* swap powershell command
2023-12-20 11:33:00 -08:00
timothycarambat
7bee849c65 chore: Force VectorCache to always be on;
update file picker spacing for attributes
2023-12-20 10:45:03 -08:00
Timothy Carambat
f1fc2d90fa
Readme updates (#475)
* wip

* btn links

* button updates

* update dev instructions

* typo
2023-12-19 13:24:50 -08:00
timothycarambat
2def9a67a1 docs: update default storage path to just write to user home dir for docker instructions 2023-12-17 15:48:56 -08:00
lunamidori5
ecfd146891
Update Dockerfile (added ffmpeg) (#451) 2023-12-15 19:35:38 -08:00
Timothy Carambat
719521c307
Document Processor v2 (#442)
* wip: init refactor of document processor to JS

* add NodeJs PDF support

* wip: parity with python processor
feat: add pptx support

* fix: forgot files

* Remove python scripts totally

* wip:update docker to boot new collector

* add package.json support

* update dockerfile for new build

* update gitignore and linting

* add more protections on file lookup

* update package.json

* test build

* update docker commands to use cap-add=SYS_ADMIN so web scraper can run
update all scripts to reflect this
remove docker build for branch
2023-12-14 15:14:56 -08:00
Timothy Carambat
cc3343ba79
add Docker internal URL to READMEs (#441)
add Docker internal URL
2023-12-13 11:59:14 -08:00
Timothy Carambat
8cc1455b72
feat: add support for variable chunk length (#415)
fix: clean up code for embedding-length clarity
resolves #388
2023-12-07 16:27:36 -08:00
timothycarambat
48dd99b028 chore: update docker boot script for storage location export 2023-12-07 15:11:48 -08:00
Timothy Carambat
655ebd9479
[Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev

* Add native embedder as an available embedder selection

* wrap model loader in try/catch

* print progress on download

* add built-in LLM support (experimental)

* Update to progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00