Commit Graph

170 Commits

Author SHA1 Message Date
Sean Hatfield
90df37582b
Per workspace model selection (#582)
* WIP model selection per workspace (migrations and openai saves properly)

* revert OpenAiOption

* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi

* remove unneeded comments

* update logic for when LLMProvider is reset, reset Ai provider files with master

* remove frontend/api reset of workspace chat and move logic to updateENV
add postUpdate callbacks to envs

* set preferred model for chat on class instantiation

* remove extra param

* linting

* remove unused var

* refactor chat model selection on workspace

* linting

* add fallback for base path to localai models

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-17 12:59:25 -08:00
Sean Hatfield
bf503ee0e9
add check to skip empty messages (#602)
* add check to skip empty messages

* add comment explaining prisma + sqlite not supporting createMany()
2024-01-16 18:23:51 -08:00
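The two changes in this commit can be sketched together: filter out empty messages before persisting, and insert rows one at a time because Prisma's SQLite connector does not support `createMany()`. This is a minimal illustration, not the project's actual code; `db.chatMessage.create` mirrors the Prisma client shape but the model name is assumed.

```javascript
// Keep only messages with non-blank content before saving.
function nonEmptyMessages(messages) {
  return messages.filter(
    (m) => typeof m.content === "string" && m.content.trim().length > 0
  );
}

// Insert one row per message instead of a single createMany() call,
// since Prisma + SQLite does not support createMany().
async function saveMessages(db, messages) {
  const rows = [];
  for (const message of nonEmptyMessages(messages)) {
    rows.push(await db.chatMessage.create({ data: message }));
  }
  return rows;
}
```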
Timothy Carambat
b35feede87
570 document api return object (#608)
* Add support for fetching single document in documents folder

* Add document object to upload + support link scraping via API

* hotfixes for documentation

* update api docs
2024-01-16 16:04:22 -08:00
Timothy Carambat
c61cbd1502
Add support for fetching single document in documents folder (#607) 2024-01-16 14:58:49 -08:00
Timothy Carambat
d0a3f1e3e1
Fix present diminsions on vectorDBs to be inferred for providers who require it (#605) 2024-01-16 13:41:01 -08:00
Timothy Carambat
f5bb064dee
Implement streaming for workspace chats via API (#604) 2024-01-16 10:37:46 -08:00
Timothy Carambat
bd158ce7b1
[Feat] Query mode to return no-result when no context found (#601)
* Query mode to return no-result when no context found

* update default error for sync chat

* remove unnecessary type conversion
2024-01-16 09:32:51 -08:00
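The query-mode behavior above can be sketched as a small guard: when retrieval returns no context, short-circuit with a canned no-result reply instead of letting the LLM answer unsupported. The response text and object shape here are illustrative assumptions, not the project's actual strings or types.

```javascript
// Illustrative no-result text; the real project defines its own message.
const NO_RESULT_TEXT =
  "There is no relevant information in this workspace to answer your query.";

// In query mode, only answer when retrieved context exists.
function queryModeResponse(contextChunks, llmReply) {
  if (!contextChunks || contextChunks.length === 0) {
    return { text: NO_RESULT_TEXT, sources: [] };
  }
  return { text: llmReply, sources: contextChunks };
}
```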
timothycarambat
7aaa4b38e7 add flex role to export endpoint 2024-01-14 17:10:49 -08:00
timothycarambat
e1dcd5ded0 Normalize pfp path to prevent traversal 2024-01-14 16:53:44 -08:00
Timothy Carambat
026849df02
normalize paths for submit URLs of `remove-documents` (#598)
normalize paths for submit URLs
2024-01-14 16:36:17 -08:00
Timothy Carambat
4f6d93159f
improve native embedder handling of large files (#584)
* improve native embedder handling of large files

* perf changes

* ignore storage tmp
2024-01-13 00:32:43 -08:00
Shuyoou
6faa0efaa8
Issue #543 support milvus vector db (#579)
* issue #543 support milvus vector db

* migrate Milvus to use MilvusClient instead of ORM
normalize env setup for docs/implementation
feat: embedder model dimension added

* update comments

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-12 13:23:57 -08:00
Timothy Carambat
7200a06ef0
prevent manager in multi-user from updating ENV via HTTP (#576)
* prevent manager in multi-user from updating ENV via HTTP

* remove unneeded args
2024-01-11 12:11:45 -08:00
Timothy Carambat
3c859ba303
Change pwd check to O(1) check to prevent timing attacks - single user mode (#575)
Change pwd check to O(1) check to prevent timing attacks
2024-01-11 10:54:55 -08:00
timothycarambat
dfd03e332c patch stream response 2024-01-10 15:32:07 -08:00
Timothy Carambat
4e2c0f04b4
Dynamic vector count on workspace settings (#567)
* Dynamic vector count on workspace settings
Add count to be workspace specific, fallback to system count
Update layout of data in settings
Update OpenAI per-token embedding price

* linting
2024-01-10 13:18:48 -08:00
Sean Hatfield
1d39b8a2ce
add Together AI LLM support (#560)
* add Together AI LLM support

* update readme to support together ai

* Patch togetherAI implementation

* add model sorting/option labels by organization for model selection

* linting + add data handling for TogetherAI

* change truthy statement
patch validLLMSelection method

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-10 12:35:30 -08:00
Timothy Carambat
8cd3a92c66
[BUG] Fixed mass_assignment vuln (#566)
Fixed mass_assignment vuln

Co-authored-by: dastaj <78434825+dastaj@users.noreply.github.com>
2024-01-10 08:42:03 -08:00
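The usual fix for a mass-assignment vulnerability is to copy only allow-listed keys from the request body, so a client cannot set privileged fields (such as `role`) it should not control. The field names below are hypothetical; the sketch shows the pattern, not the patched endpoint:

```javascript
// Hypothetical allow-list of fields a user may update about themselves.
const WRITABLE_FIELDS = ["username", "password"];

// Copy only allow-listed keys; everything else in the body is ignored.
function pickWritable(body) {
  const updates = {};
  for (const key of WRITABLE_FIELDS) {
    if (body[key] !== undefined) updates[key] = body[key];
  }
  return updates;
}
```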
Timothy Carambat
755c10b8ca
[API] patch swagger host to be relative (#563)
patch swagger host to be relative
2024-01-09 19:49:51 -08:00
Timothy Carambat
58971e8b30
Build & Publish AnythingLLM for ARM64 and x86 (#549)
* Update build process to support multi-platform builds
Bump @lancedb/vectordb to 0.1.19 for ARM&AMD compatibility
Patch puppeteer on ARM builds because of broken chromium
resolves #539
resolves #548

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-01-08 16:15:01 -08:00
Timothy Carambat
cce48c2163
[Automated Builds] Patch lockfiles with new devdeps (#553)
patch lockfiles with new devdeps
2024-01-08 15:48:03 -08:00
Francisco Bischoff
990a2e85bf
devcontainer v1 (#297)
Implement support for GitHub codespaces and VSCode devcontainers
---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
2024-01-08 15:31:06 -08:00
timothycarambat
19e3889f29 update envs to not display https localhost 2024-01-06 15:38:44 -08:00
timothycarambat
3e088f22b1 fix: Patch tiktoken method missing
resolves #541
2024-01-05 09:39:19 -08:00
Timothy Carambat
e9f7b9b79e
Handle undefined stream chunk for native LLM (#534) 2024-01-04 18:05:06 -08:00
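Handling an undefined stream chunk generally amounts to guarding the delta payload before appending it to the running response. The chunk shape below follows the common OpenAI-style streaming format and is an assumption, not the native LLM's actual wire format:

```javascript
// Append a streamed token only when the chunk actually carries one;
// skip chunks with a missing or null delta payload.
function appendChunk(textSoFar, chunk) {
  const token = chunk?.choices?.[0]?.delta?.content;
  if (token === undefined || token === null) return textSoFar;
  return textSoFar + token;
}
```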
pritchey
74d2711d80
523-Added support for HTTPS to Server. (#524)
* Added support for HTTPS to server.

* Move boot scripts to helper file
catch bad ssl boot config
fallback SSL boot to HTTP

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-04 17:22:15 -08:00
Sayan Gupta
b7d2756754
Issue #204 Added a check to ensure that 'chunk.payload' exists and contains the 'id' property (#526)
* Issue #204 Added a check to ensure that 'chunk.payload' exists and contains the 'id' property before attempting to destructure it

* run linter

* simplify condition and comment

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
2024-01-04 16:39:43 -08:00
Timothy Carambat
d8ca92df88
Onboarding V2 (#502)
* WIP onboarding v2

* Welcome screen for onboarding complete

* fix home page and WIP create skeleton for llm preference search/options

* render llms as options

* add search functionality to llm preference & add survey step

* fix openai settings undefined & create custom logo onboarding page

* add user setup UI

* add data handling & privacy onboarding screen

* add create workspace onboarding screen

* fix survey width in onboarding

* create vector database connection onboarding page

* add workspace image & all skeleton ui complete

* fix navigation buttons and ui tweaks to fit on screen

* WIP LLMPreference

* LLM Preference screen fully functional

* create components for vector db options and fix styling of azure options

* remove unneeded comment

* vector db connection onboarding screen complete

* minor ui tweak to searchbar

* user setup page fully working

* create workspace onboarding page fully working

* useNavigate for navigation between pages

* mobile layout, cleanup old files, survey functionality implemented

* fix default logo appearing when should be blank & password setup bug fix

* Modify flow of onboarding
todo: embedding set up

* Add embedder setup screen & insert into flow

* update embedding back button
auto-dismiss toasts on each step

* move page defs under imports
fix bg color on mobile styling

---------

Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
2024-01-04 15:54:31 -08:00
Timothy Carambat
92da23e963
Handle special token in TikToken (#528)
* Handle special token in TikToken
resolves #525

* remove duplicate method
add clarification comment on implementation
2024-01-04 15:47:00 -08:00
Timothy Carambat
75dd86967c
Implement AzureOpenAI model chat streaming (#518)
resolves #492
2024-01-03 16:25:39 -08:00
Timothy Carambat
ceadc8d467
patch gpt-4-turbo token allowance for Azure model (#514) 2024-01-02 12:49:48 -08:00
timothycarambat
237c544ebc Merge branch 'master' of github.com:Mintplex-Labs/anything-llm 2024-01-02 12:44:26 -08:00
timothycarambat
d99b87e7d8 patch workspace-chats API endpoint to be generally available instead of forced multi-user 2024-01-02 12:44:17 -08:00
Timothy Carambat
6d5968bf7e
Llm chore cleanup (#501)
* move internal functions to private in class
simplify lc message convertor

* Fix hanging Context text when none is present
2023-12-28 14:42:34 -08:00
Timothy Carambat
2a1202de54
Patch Ollama Streaming chunk issues (#500)
Replace stream/sync chats with Langchain interface for now
connect #499
ref: https://github.com/Mintplex-Labs/anything-llm/issues/495#issuecomment-1871476091
2023-12-28 13:59:47 -08:00
Timothy Carambat
d7481671ba
Prevent external service localhost question (#497)
* Prevent external service localhost question

* add 0.0.0.0 to docker-invalid URL

* clarify hint
2023-12-28 10:47:02 -08:00
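The localhost/0.0.0.0 check described above can be sketched as hostname validation on the configured base URL: inside a Docker container, loopback addresses point at the container itself, not the host machine, so they are flagged as invalid. The list and function name here are assumptions for illustration:

```javascript
// Hostnames that resolve to the container itself, not the Docker host.
const INVALID_DOCKER_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0"];

// Return true when a base URL would loop back inside the container.
function isInvalidDockerBasePath(basePath) {
  try {
    const { hostname } = new URL(basePath);
    return INVALID_DOCKER_HOSTS.includes(hostname);
  } catch {
    return false; // not a parseable URL; other validation handles this
  }
}
```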
Timothy Carambat
e0a0a8976d
Add Ollama as LLM provider option (#494)
* Add support for Ollama as LLM provider
resolves #493
2023-12-27 17:21:47 -08:00
Timothy Carambat
24227e48a7
Add LLM support for Google Gemini-Pro (#492)
resolves #489
2023-12-27 17:08:03 -08:00
Sean Hatfield
1d9ba76b92
fix success is not defined error (#484) 2023-12-21 10:31:14 -08:00
timothycarambat
049bfa14cb fix: fully separate chunkconcurrency from chunk length 2023-12-20 11:20:40 -08:00
timothycarambat
7bee849c65 chore: Force VectorCache to always be on;
update file picker spacing for attributes
2023-12-20 10:45:03 -08:00
timothycarambat
a7f6003277 fix: set lower maxChunk limit on native embedder to stay within resource constraints
chore: update comment for what embedding chunk means
2023-12-19 16:20:34 -08:00
Timothy Carambat
ecf4295537
Add ability to grab youtube transcripts via doc processor (#470)
* Add ability to grab youtube transcripts via doc processor

* dynamic imports
swap out Github for Youtube in placeholder text
2023-12-18 17:17:26 -08:00
Timothy Carambat
452582489e
GitHub loader extension + extension support v1 (#469)
* feat: implement github repo loading
fix: purge of folders
fix: rendering of sub-files

* noshow delete on custom-documents

* Add API key support because of rate limits

* WIP for frontend of data connectors

* wip

* Add frontend form for GitHub repo data connector

* remove console.logs
block custom-documents from being deleted

* remove _meta unused arg

* Add support for ignore pathing in request
Ignore path input via tagging

* Update hint
2023-12-18 15:48:02 -08:00
Timothy Carambat
65c7c0a518
fix: patch api key not persisting when setting LLM/Embedder (#458) 2023-12-16 10:21:36 -08:00
Timothy Carambat
61db981017
feat: Embed on-instance Whisper model for audio/mp4 transcribing (#449)
* feat: Embed on-instance Whisper model for audio/mp4 transcribing
resolves #329

* additional logging

* add placeholder for tmp folder in collector storage
Add cleanup of hotdir and tmp on collector boot to prevent hanging files
split loading of model and file conversion into concurrency

* update README

* update model size

* update supported filetypes
2023-12-15 11:20:13 -08:00
Timothy Carambat
719521c307
Document Processor v2 (#442)
* wip: init refactor of document processor to JS

* add NodeJs PDF support

* wip: parity with python processor
feat: add pptx support

* fix: forgot files

* Remove python scripts totally

* wip:update docker to boot new collector

* add package.json support

* update dockerfile for new build

* update gitignore and linting

* add more protections on file lookup

* update package.json

* test build

* update docker commands to use cap-add=SYS_ADMIN so web scraper can run
update all scripts to reflect this
remove docker build for branch
2023-12-14 15:14:56 -08:00
timothycarambat
5f6a013139 Change server bootup log 2023-12-14 13:52:11 -08:00
Timothy Carambat
1e98da07bc
docs: placeholder for model downloads folder (#446) 2023-12-14 10:31:14 -08:00
Timothy Carambat
37cdb845a4
patch: implement @lunamidori hotfix for LocalAI streaming chunk overflows (#433)
* patch: implement @lunamidori hotfix for LocalAI streaming chunk overflows
resolves #416

* change log to error log

* log trace

* lint
2023-12-12 16:20:06 -08:00