# How to use Dockerized AnythingLLM

Use the Dockerized version of AnythingLLM for a much faster and more complete startup of AnythingLLM.

### Minimum Requirements

> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.

- `docker` installed on your machine
- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

\*AnythingLLM by default embeds text on-instance privately. [Learn More](../server/storage/models/README.md)
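
As a quick sanity check, the tooling prerequisites above can be verified from a terminal; this is just a sketch that checks the tools are on your `PATH`:

```shell
# Minimal sketch: check that the prerequisite tools above are installed.
check_prereq() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: NOT FOUND"
  fi
}

for cmd in docker node yarn; do
  check_prereq "$cmd"
done
```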

## Recommended way to run dockerized AnythingLLM!

> [!IMPORTANT]
> If you are running another service on localhost like Chroma, LocalAI, or LM Studio,
> you will need to use http://host.docker.internal:xxxx to access the service from within
> the docker container, as `localhost:xxxx` will not resolve for the host system.
>
> **Requires** Docker v18.03+ on Win/Mac and 20.10+ on Linux/Ubuntu for host.docker.internal to resolve!
>
> _Linux_: add `--add-host=host.docker.internal:host-gateway` to the docker run command for this to resolve.
>
> eg: A Chroma host URL running on localhost:8000 on the host machine needs to be http://host.docker.internal:8000
> when used in AnythingLLM.
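
The localhost-to-host.docker.internal substitution is mechanical; a small helper like this (purely illustrative, not part of AnythingLLM) shows the mapping:

```shell
# Illustrative helper: rewrite a localhost URL so it resolves from inside
# a Docker container (not part of AnythingLLM itself).
to_docker_host_url() {
  echo "$1" | sed -e 's#://localhost#://host.docker.internal#' \
                  -e 's#://127\.0\.0\.1#://host.docker.internal#'
}

to_docker_host_url "http://localhost:8000"   # http://host.docker.internal:8000
```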

> [!TIP]
> It is best to mount the container's storage volume to a folder on your host machine
> so that you can pull in future updates without deleting your existing data!

Pull in the latest image from Docker Hub. Supports both `amd64` and `arm64` CPU architectures.

```shell
docker pull mintplexlabs/anythingllm
```
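
Both architectures are published under the same tag, so `docker pull` selects the right variant automatically. If you want to double-check which one your machine needs, `uname -m` maps onto Docker's architecture names roughly like this (a sketch):

```shell
# Sketch: map `uname -m` output onto Docker's architecture names.
docker_arch() {
  case "$1" in
    x86_64)        echo "amd64" ;;
    aarch64|arm64) echo "arm64" ;;
    *)             echo "unsupported: $1" ;;
  esac
}

echo "This machine wants the $(docker_arch "$(uname -m)") image."
```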

<table>
<tr>
<th colspan="2">Mount the storage locally and run AnythingLLM in Docker</th>
</tr>
<tr>
<td>
Linux/MacOS
</td>
<td>

```shell
export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
```

</td>
</tr>
<tr>
<td>
Windows
</td>
<td>

```powershell
# Run this in a powershell terminal
$env:STORAGE_LOCATION="$HOME\Documents\anythingllm"; `
If(!(Test-Path $env:STORAGE_LOCATION)) {New-Item $env:STORAGE_LOCATION -ItemType Directory}; `
If(!(Test-Path "$env:STORAGE_LOCATION\.env")) {New-Item "$env:STORAGE_LOCATION\.env" -ItemType File}; `
docker run -d -p 3001:3001 `
--cap-add SYS_ADMIN `
-v "$env:STORAGE_LOCATION`:/app/server/storage" `
-v "$env:STORAGE_LOCATION\.env:/app/server/.env" `
-e STORAGE_DIR="/app/server/storage" `
mintplexlabs/anythingllm;
```

</td>
</tr>
</table>

Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.
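
If you script the container startup, you may want to wait until the app answers before opening the browser. A hedged sketch using `curl` (no specific health endpoint is assumed; it just polls a URL until it responds):

```shell
# Sketch: poll a URL until it answers or we give up (requires curl).
wait_for_url() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out"
  return 1
}

# e.g. wait_for_url "http://localhost:3001" 60
```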

## How to use the user interface

- To access the full application, visit `http://localhost:3001` in your browser.

## About UID and GID in the ENV

- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user's UID and GID and what is set in the `.env` file, you may experience permission issues.
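
To see whether your host user matches the container default, compare `id -u` and `id -g` against 1000. This is just a sketch of the comparison described above:

```shell
# Sketch: compare host UID:GID against the container default of 1000:1000.
uid_gid_status() {
  if [ "$1" -eq 1000 ] && [ "$2" -eq 1000 ]; then
    echo "match"
  else
    echo "mismatch: host is $1:$2, container default is 1000:1000"
  fi
}

uid_gid_status "$(id -u)" "$(id -g)"
```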

## Build locally from source _not recommended for casual use_

- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create an empty SQLite DB file.
- `cd docker/`
- `cp .env.example .env` **you must do this before building**
- `docker-compose up -d --build` to build the image - this will take a few moments.

Your docker host will show the image as online once the build process is completed. This will serve the app at `http://localhost:3001`.

## ⚠️ Vector DB support ⚠️

Out of the box, all vector databases are supported. Any vector databases requiring special configuration are listed below.
2023-06-13 22:19:17 +02:00
### Using local ChromaDB with Dockerized AnythingLLM
2024-02-22 03:42:32 +01:00
2023-06-13 22:19:17 +02:00
- Ensure in your `./docker/.env` file that you have
2024-02-22 03:42:32 +01:00
2023-06-13 22:19:17 +02:00
```
#./docker/.env
...other configs
VECTOR_DB="chroma"
CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on host port, not container.
2023-09-29 22:20:06 +02:00
# CHROMA_API_HEADER="X-Api-Key" // If you have an Auth middleware on your instance.
# CHROMA_API_KEY="sk-123abc"
2023-06-13 22:19:17 +02:00
...other configs
```
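
Before starting AnythingLLM you can confirm Chroma is actually reachable. Chroma's classic health check lives at `/api/v1/heartbeat` (newer Chroma versions may use a different path), so a sketch of the probe:

```shell
# Sketch: build Chroma's heartbeat URL from an endpoint and probe it.
# (/api/v1/heartbeat is Chroma's classic health endpoint; newer versions
# may use a different path.)
chroma_heartbeat_url() {
  echo "${1%/}/api/v1/heartbeat"
}

# From the host machine, before launching the container:
# curl -s "$(chroma_heartbeat_url http://localhost:8000)"
```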

## Common questions and fixes

### API is not working, cannot login, LLM is "offline"?

You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL
is not `http://localhost:3001` and is instead something like `http://193.xx.xx.xx:3001` - in this case, all you need to do is add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:

```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```

For example, if the docker instance is available on `192.186.1.222`, your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.
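
Generating that file can be scripted; the IP below is the example address from above, so substitute your own reachable address:

```shell
# Sketch: write frontend/.env.production for a remote deployment.
# 192.186.1.222 is the example IP from above; substitute your own.
REACHABLE_IP="192.186.1.222"
mkdir -p frontend
cat > frontend/.env.production <<EOF
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://${REACHABLE_IP}:3001/api"
EOF
```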

### Having issues with Ollama?

If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` then visit the README below.

[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)

### Still not working?

[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)