From 655ebd94796acc3b252f1cb6d3b0be68ca81666f Mon Sep 17 00:00:00 2001
From: Timothy Carambat
Date: Thu, 7 Dec 2023 14:48:27 -0800
Subject: [PATCH] [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF
files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation
hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import
update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
---
README.md | 28 +-
docker/Dockerfile | 7 +-
.../LLMSelection/NativeLLMOptions/index.jsx | 84 ++
.../src/components/PrivateRoute/index.jsx | 10 +-
.../GeneralSettings/LLMPreference/index.jsx | 15 +
.../Steps/DataHandling/index.jsx | 7 +
.../Steps/LLMSelection/index.jsx | 11 +
images/screenshots/SCREENSHOTS.md | 18 -
images/screenshots/document.png | Bin 539571 -> 0 bytes
images/screenshots/home.png | Bin 608899 -> 0 bytes
images/screenshots/llm_selection.png | Bin 531359 -> 0 bytes
images/screenshots/uploading_doc.gif | Bin 3569725 -> 0 bytes
images/screenshots/vector_databases.png | Bin 595692 -> 0 bytes
server/models/systemSettings.js | 9 +-
server/package.json | 5 +-
server/storage/models/README.md | 28 +-
server/utils/AiProviders/native/index.js | 196 ++++
server/utils/chats/stream.js | 30 +
server/utils/helpers/customModels.js | 24 +-
server/utils/helpers/index.js | 3 +
server/utils/helpers/updateENV.js | 33 +-
server/yarn.lock | 895 +++++++++++++++++-
22 files changed, 1304 insertions(+), 99 deletions(-)
create mode 100644 frontend/src/components/LLMSelection/NativeLLMOptions/index.jsx
delete mode 100644 images/screenshots/SCREENSHOTS.md
delete mode 100644 images/screenshots/document.png
delete mode 100644 images/screenshots/home.png
delete mode 100644 images/screenshots/llm_selection.png
delete mode 100644 images/screenshots/uploading_doc.gif
delete mode 100644 images/screenshots/vector_databases.png
create mode 100644 server/utils/AiProviders/native/index.js
diff --git a/README.md b/README.md
index 379fb4da..f08368e8 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
- AnythingLLM: A document chatbot to chat with anything!.
+ AnythingLLM: A private ChatGPT to chat with anything!.
An efficient, customizable, and open-source enterprise-ready document chatbot solution.
@@ -22,10 +22,9 @@
-A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use.
+A full-stack application that enables you to turn any document, resource, or piece of content into context that any LLM can use as references during chatting. This application allows you to pick and choose which LLM or Vector Database you want to use, and also supports multi-user management and permissions.
![Chatting](/images/screenshots/chatting.gif)
-[view more screenshots](/images/screenshots/SCREENSHOTS.md)
### Watch the demo!
@@ -33,17 +32,16 @@ A full-stack application that enables you to turn any document, resource, or pie
### Product Overview
-AnythingLLM aims to be a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions.
-
-Anything LLM is a full-stack product that you can run locally as well as host remotely and be able to chat intelligently with any documents you provide it.
+AnythingLLM is a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions to build a private ChatGPT with no compromises that you can run locally or host remotely, and chat intelligently with any documents you provide it.
AnythingLLM divides your documents into objects called `workspaces`. A Workspace functions a lot like a thread, but with the addition of containerization of your documents. Workspaces can share documents, but they do not talk to each other so you can keep your context for each workspace clean.
Some cool features of AnythingLLM
- **Multi-user instance support and permissioning**
-- Atomically manage documents in your vector database from a simple UI
+- Multiple document type support (PDF, TXT, DOCX, etc)
+- Manage documents in your vector database from a simple UI
- Two chat modes `conversation` and `query`. Conversation retains previous questions and amendments. Query is simple QA against your documents
-- Each chat response contains a citation that is linked to the original document source
+- In-chat citations linked to the original document source and text
- Simple technology stack for fast iteration
- 100% Cloud deployment ready.
- "Bring your own LLM" model.
@@ -52,6 +50,7 @@ Some cool features of AnythingLLM
### Supported LLMs, Embedders, and Vector Databases
**Supported LLMs:**
+- [Any open-source llama.cpp compatible model](/server/storage/models/README.md#text-generation-llm-selection)
- [OpenAI](https://openai.com)
- [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
- [Anthropic ClaudeV2](https://www.anthropic.com/)
@@ -80,13 +79,18 @@ This monorepo consists of three main sections:
- `server`: A nodeJS + express server to handle all the interactions and do all the vectorDB management and LLM interactions.
- `docker`: Docker instructions and build process + information for building from source.
-### Requirements
+### Minimum Requirements
+> [!TIP]
+> Running AnythingLLM on AWS/GCP/Azure?
+> You should aim for at least 2GB of RAM. Disk storage is proportional to how much data
+> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.
+
- `yarn` and `node` on your machine
- `python` 3.9+ for running scripts in `collector/`.
- access to an LLM running locally or remotely.
-- (optional) a vector database like Pinecone, qDrant, Weaviate, or Chroma*.
*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)
+
*AnythingLLM by default embeds text on instance privately [Learn More](/server/storage/models/README.md)
## Recommended usage with Docker (easy!)
@@ -107,8 +111,8 @@ docker run -d -p 3001:3001 \
mintplexlabs/anythingllm:master
```
-Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
-container rebuilds or pulls from Docker Hub.
+Open [http://localhost:3001](http://localhost:3001) and you are now using AnythingLLM!
+All your data and progress will now persist between container rebuilds or pulls from Docker Hub.
[Learn more about running AnythingLLM with Docker](./docker/HOW_TO_USE_DOCKER.md)
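The README changes above state that AnythingLLM now embeds text on-instance by default, and the commit notes name all-MiniLM-L6-v2 as the native embedder model. As a rough illustration only (the actual embedder implementation is not part of this patch), generating such embeddings locally in Node.js might look like the sketch below; the use of `@xenova/transformers` is an assumption about the underlying library, and the input strings are placeholders.

```js
// Hedged sketch: on-instance text embedding with all-MiniLM-L6-v2.
// Assumes the @xenova/transformers package; not the verbatim AnythingLLM code.
import { pipeline } from "@xenova/transformers";

async function embedTexts(texts) {
  // Downloads/caches the ONNX weights locally on first use, then runs fully offline.
  const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  // output is a Tensor of shape [texts.length, 384]; convert it to plain arrays.
  return output.tolist();
}

// Example usage with placeholder input:
embedTexts(["AnythingLLM embeds documents privately on the instance."]).then(
  (vectors) => console.log(vectors[0].length) // 384-dimensional embedding
);
```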
diff --git a/docker/Dockerfile b/docker/Dockerfile
index a076df67..87914416 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -13,7 +13,7 @@ RUN DEBIAN_FRONTEND=noninteractive apt-get update && \
libgcc1 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libx11-6 libx11-xcb1 libxcb1 \
libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 \
libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release \
- xdg-utils && \
+ xdg-utils git build-essential && \
mkdir -p /etc/apt/keyrings && \
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_18.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \
@@ -60,6 +60,11 @@ RUN cd ./server/ && yarn install --production && yarn cache clean && \
rm /app/server/node_modules/vectordb/x86_64-apple-darwin.node && \
rm /app/server/node_modules/vectordb/aarch64-apple-darwin.node
+# Compile Llama.cpp bindings for node-llama-cpp for this operating system.
+USER root
+RUN cd ./server && npx --no node-llama-cpp download
+USER anythingllm
+
# Build the frontend
FROM frontend-deps as build-stage
COPY ./frontend/ ./frontend/
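The Dockerfile now installs `git` and `build-essential` and runs `npx --no node-llama-cpp download` so the llama.cpp bindings are compiled at image build time rather than on first boot. At runtime, the new native provider (`server/utils/AiProviders/native/index.js` in the diffstat) drives those bindings; the sketch below is only an approximation of that flow using node-llama-cpp's documented classes, with a module-level cache standing in for the "persist model in memory for N+1 chats" behavior mentioned in the commit notes. The model path and prompt are placeholders.

```js
// Hedged sketch of the native llama.cpp flow; not the verbatim provider code.
const path = require("path");

let cachedModel = null; // keep the loaded GGUF in memory so later chats reuse it

async function getSession(modelFile) {
  // node-llama-cpp ships as ESM, so a CommonJS server loads it via dynamic import.
  const { LlamaModel, LlamaContext, LlamaChatSession } = await import("node-llama-cpp");

  if (!cachedModel) {
    cachedModel = new LlamaModel({
      // Placeholder storage location for downloaded GGUF files.
      modelPath: path.resolve("server/storage/models/downloaded", modelFile),
    });
  }
  const context = new LlamaContext({ model: cachedModel });
  return new LlamaChatSession({ context });
}

// Example usage with a placeholder GGUF filename and prompt:
(async () => {
  const session = await getSession("example-7b.Q4_K_M.gguf");
  const reply = await session.prompt("Summarize this workspace in one sentence.");
  console.log(reply);
})();
```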
diff --git a/frontend/src/components/LLMSelection/NativeLLMOptions/index.jsx b/frontend/src/components/LLMSelection/NativeLLMOptions/index.jsx
new file mode 100644
index 00000000..a41a81fe
--- /dev/null
+++ b/frontend/src/components/LLMSelection/NativeLLMOptions/index.jsx
@@ -0,0 +1,84 @@
+import { useEffect, useState } from "react";
+import { Flask } from "@phosphor-icons/react";
+import System from "@/models/system";
+
+export default function NativeLLMOptions({ settings }) {
+ return (
+
+
+
+
+
+ Using a locally hosted LLM is experimental. Use with caution.
+
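The component above is shown only in part (the diff is truncated here, and some JSX markup was lost in extraction). Its imports of `useEffect`, `useState`, and the `System` model suggest the remainder of the file enumerates GGUF files already downloaded to the server so the user can pick one. A hedged sketch of what such a hook could look like follows; the `System.customModels` call and the `"native-llm"` key are assumptions, not confirmed by this patch.

```jsx
// Hedged sketch of a model-listing hook for NativeLLMOptions; not the verbatim component code.
import { useEffect, useState } from "react";
import System from "@/models/system";

function useLocalModels() {
  const [models, setModels] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    async function findModels() {
      // Assumed helper: ask the server which GGUF files exist in its model storage.
      const { models = [] } = await System.customModels("native-llm");
      setModels(models);
      setLoading(false);
    }
    findModels();
  }, []);

  return { models, loading };
}
```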