# How to use Dockerized Anything LLM
Use the Dockerized version of AnythingLLM for a much faster and more complete startup of AnythingLLM.

### Minimum Requirements
> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.

- `docker` installed on your machine
- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

\*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)
## Recommended way to run dockerized AnythingLLM!
> [!IMPORTANT]
> If you are running another service on localhost like Chroma, LocalAI, or LMStudio
> you will need to use http://host.docker.internal:xxxx to access the service from within
> the docker container using AnythingLLM, since `localhost:xxxx` will not resolve to the host system.
>
> **Requires** Docker v18.03+ on Win/Mac and 20.10+ on Linux/Ubuntu for host.docker.internal to resolve!
>
> _Linux_: add `--add-host=host.docker.internal:host-gateway` to the docker run command for this to resolve (see the example below).
>
> e.g. a Chroma host URL running on localhost:8000 on the host machine needs to be http://host.docker.internal:8000
> when used in AnythingLLM.
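
On Linux, for example, the flag slots straight into the run command (a sketch that mirrors the Linux command in the next section, with the extra flag added; it assumes `STORAGE_LOCATION` is set up as shown there):

```shell
docker run -d -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```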
> [!TIP]
> It is best to mount the container's storage volume to a folder on your host machine
> so that you can pull in future updates without deleting your existing data!

Pull in the latest image from Docker Hub. Supports both `amd64` and `arm64` CPU architectures.
```shell
docker pull mintplexlabs/anythingllm
```
<table>
<tr>
<th colspan="2">Mount the storage locally and run AnythingLLM in Docker</th>
</tr>
<tr>
<td>
Linux/MacOS
</td>
<td>

```shell
export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
```

</td>
</tr>
<tr>
<td>
Windows
</td>
<td>

```powershell
# Run this in powershell terminal
$env:STORAGE_LOCATION="$HOME\Documents\anythingllm"; `
If(!(Test-Path $env:STORAGE_LOCATION)) {New-Item $env:STORAGE_LOCATION -ItemType Directory}; `
If(!(Test-Path "$env:STORAGE_LOCATION\.env")) {New-Item "$env:STORAGE_LOCATION\.env" -ItemType File}; `
docker run -d -p 3001:3001 `
--cap-add SYS_ADMIN `
-v "$env:STORAGE_LOCATION`:/app/server/storage" `
-v "$env:STORAGE_LOCATION\.env:/app/server/.env" `
-e STORAGE_DIR="/app/server/storage" `
mintplexlabs/anythingllm;
```

</td>
</tr>
<tr>
<td>Docker Compose</td>
<td>

```yaml
version: '3.8'
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    container_name: anythingllm
    ports:
      - "3001:3001"
    cap_add:
      - SYS_ADMIN
    environment:
      # Adjust for your environment
      - STORAGE_DIR=/app/server/storage
      - JWT_SECRET="make this a large list of random numbers and letters 20+"
      - LLM_PROVIDER=ollama
      - OLLAMA_BASE_PATH=http://127.0.0.1:11434
      - OLLAMA_MODEL_PREF=llama2
      - OLLAMA_MODEL_TOKEN_LIMIT=4096
      - EMBEDDING_ENGINE=ollama
      - EMBEDDING_BASE_PATH=http://127.0.0.1:11434
      - EMBEDDING_MODEL_PREF=nomic-embed-text:latest
      - EMBEDDING_MODEL_MAX_CHUNK_LENGTH=8192
      - VECTOR_DB=lancedb
      - WHISPER_PROVIDER=local
      - TTS_PROVIDER=native
      - PASSWORDMINCHAR=8
      # Add any other keys here for services or settings
      # you can find in the docker/.env.example file
    volumes:
      - anythingllm_storage:/app/server/storage
    restart: always

volumes:
  anythingllm_storage:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/local/disk
```
</td>
</tr>
</table>
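
If you go the Docker Compose route, save the block above as `docker-compose.yml`, point `device:` at a real folder on your disk, and bring it up (a minimal sketch):

```shell
mkdir -p /path/on/local/disk   # the bind target for the storage volume must exist
docker compose up -d           # or `docker-compose up -d` on older installs
```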
Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.
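
Because the storage is mounted on the host, updating is typically just pulling the newer image and recreating the container (a sketch, not an official procedure; use whatever container ID or name `docker ps` shows for your instance):

```shell
docker pull mintplexlabs/anythingllm          # fetch the latest image
docker stop <container-id> && docker rm <container-id>
# re-run the same `docker run ...` command from the table above;
# everything under $STORAGE_LOCATION is preserved
```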
## How to use the user interface

- To access the full application, visit `http://localhost:3001` in your browser.

## About UID and GID in the ENV

- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user UID and GID and what is set in the `.env` file, you may experience permission issues.
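
For example, you can check your host user's IDs and, if they are not 1000, mirror them in the mounted `.env` before starting the container (a sketch; it assumes the keys in the `.env` are literally `UID` and `GID` as described above):

```shell
id -u   # prints your host user's UID
id -g   # prints your host user's GID

# If they differ from 1000, set matching values in the mounted .env
echo 'UID=1001' >> "$STORAGE_LOCATION/.env"
echo 'GID=1001' >> "$STORAGE_LOCATION/.env"
```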
## Build locally from source _not recommended for casual use_

- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create an empty SQLite DB file.
- `cd docker/`
- `cp .env.example .env` **you must do this before building**
- `docker-compose up -d --build` to build the image - this will take a few moments.

Your docker host will show the image as online once the build process is completed. This will build the app to `http://localhost:3001`.
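
Put together, the build roughly looks like this (a sketch; the repository URL is assumed to be the official Mintplex-Labs repo):

```shell
git clone https://github.com/Mintplex-Labs/anything-llm.git
cd anything-llm
mkdir -p server/storage && touch server/storage/anythingllm.db   # empty SQLite DB file
cd docker/
cp .env.example .env               # must exist before building
docker-compose up -d --build       # builds the image; takes a few moments
```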
## Integrations and one-click setups

The integrations below are templates or tooling built by the community to make running the docker experience of AnythingLLM easier.

### Use the Midori AI Subsystem to Manage AnythingLLM

Follow the setup found on the [Midori AI Subsystem Site](https://io.midori-ai.xyz/subsystem/manager/) for your host OS.
After setting that up, install the AnythingLLM docker backend to the Midori AI Subsystem.
Once that is done, you are all set!
## Common questions and fixes

### Cannot connect to service running on localhost!

If you are in docker and cannot connect to a service running on your host machine on a local interface or loopback address such as:

- `localhost`
- `127.0.0.1`
- `0.0.0.0`

> [!IMPORTANT]
> On Linux `http://host.docker.internal:xxxx` does not work.
> Use `http://172.17.0.1:xxxx` instead to emulate this functionality.

Then in docker you need to replace that localhost part with `host.docker.internal`. For example, if running Ollama on the host machine, bound to http://127.0.0.1:11434 you should put `http://host.docker.internal:11434` into the connection URL in AnythingLLM.
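
One quick way to confirm the remapping works is to probe the service from inside the running container (a sketch; it assumes `curl` is available in the image and uses Ollama's default port):

```shell
docker exec -it <anythingllm-container> curl http://host.docker.internal:11434
# Any response from the host service here means the same URL will also work
# in the AnythingLLM connection settings.
```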
### API is not working, cannot login, LLM is "offline"?
You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL
is not `http://localhost:3001` and is instead something like `http://193.xx.xx.xx:3001` - in this case all you need to do is add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:
```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```
For example, if the docker instance is available on `192.186.1.222` your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.
### Having issues with Ollama?

If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434`, then visit the README below.

[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)

### Still not working?

[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)