# Run AnythingLLM in production without Docker
> **Warning**
> This method of deployment is not supported by the core team and is to be used as a reference for your own deployment. You are fully responsible for securing your deployment and data in this mode. Any issues experienced from bare-metal or non-containerized deployments will not be answered or supported.
Here you can find the scripts and known working process to run AnythingLLM outside of a Docker container. This method of deployment is preferable for those who use local LLMs and want native performance on their devices.
## Minimum Requirements
> **Tip**
> You should aim for at least 2GB of RAM. Disk storage is proportional to how much data you will be storing (documents, vectors, models, etc.). A minimum of 10GB is recommended.
- NodeJS v18
- Yarn
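
Before going further, it can help to confirm the host actually meets these requirements. A quick sanity check, assuming a typical Linux server with standard coreutils and Node/Yarn already on the PATH:

```bash
node -v   # should report v18.x
yarn -v   # any recent Yarn release
free -h   # total RAM; aim for at least 2GB
df -h .   # free disk space; 10GB+ recommended for documents, vectors, and models
```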
## Getting started
1. Clone the repo onto your server as the user the application will run as.
   `git clone git@github.com:Mintplex-Labs/anything-llm.git`

2. `cd anything-llm` and run `yarn setup`. This will install all dependencies needed to run in production as well as debug the application.

3. `cp server/.env.example server/.env` to create the basic ENV file from which instance settings will be read on service start.

4. Ensure that the `server/.env` file has at least these keys to start. These values will persist, and this file will be automatically written and managed after your first successful boot.

   ```
   STORAGE_DIR="/your/absolute/path/to/server/storage"
   ```

5. Edit the `frontend/.env` file so that `VITE_API_BASE` is set to `/api`. The comments in the .env document which value you should use. (A consolidated sketch of these steps follows the list.)

   ```
   # VITE_API_BASE='http://localhost:3001/api' # Use this URL when developing locally
   # VITE_API_BASE="https://$CODESPACE_NAME-3001.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN/api" # for Github Codespaces
   VITE_API_BASE='/api' # Use this URL deploying on non-localhost address OR in docker.
   ```
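
For reference, steps 1 - 5 above can be condensed into one shell session. This is only a sketch: it assumes `yarn setup` already created `frontend/.env` (otherwise copy it from `frontend/.env.example` first), the storage path is just an example, and the `sed` edit assumes an uncommented `VITE_API_BASE=` line is present.

```bash
# Clone and install dependencies as the user the app will run as.
git clone git@github.com:Mintplex-Labs/anything-llm.git
cd anything-llm && yarn setup

# Create the server ENV file and point STORAGE_DIR at an absolute path you own.
cp server/.env.example server/.env
mkdir -p "$(pwd)/server/storage"                              # example location
echo "STORAGE_DIR=\"$(pwd)/server/storage\"" >> server/.env

# Point the frontend at the relative /api base used for non-localhost deployments.
sed -i "s|^VITE_API_BASE=.*|VITE_API_BASE='/api'|" frontend/.env
```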
## To start the application
AnythingLLM is made up of three main sections: the `frontend`, `server`, and `collector`. When running in production you will run the `server` and `collector` as two separate processes, with a build step to compile the frontend.
1. Build the frontend application.
   `cd frontend && yarn build` - this will produce a `frontend/dist` folder that will be used later.

2. Copy `frontend/dist` to `server/public` - `cp -R frontend/dist server/public`. This should create a folder in `server` named `public` which contains a top-level `index.html` file and various other files/folders.

   _(optional)_ Build native LLM support if using `native` as your LLM.
   `cd server && npx --no node-llama-cpp download`

3. Migrate and prepare your database file.

   ```
   cd server && npx prisma generate --schema=./prisma/schema.prisma
   cd server && npx prisma migrate deploy --schema=./prisma/schema.prisma
   ```

4. Boot the server in production.
   `cd server && NODE_ENV=production node index.js &`

5. Boot the collector in another process.
   `cd collector && NODE_ENV=production node index.js &`
AnythingLLM should now be running on `http://localhost:3001`!
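
The trailing `&` only backgrounds each process in your current shell, so the services may not survive the end of an SSH session. A quick way to confirm both services came up, plus a minimal sketch of keeping them detached with `nohup` (the log locations are assumptions; use whatever fits your host):

```bash
# Confirm the server is responding (it also serves the built frontend).
curl -I http://localhost:3001

# Minimal sketch: keep both processes alive after you log out by detaching them.
mkdir -p $HOME/logs
cd $HOME/anything-llm/server && NODE_ENV=production nohup node index.js > $HOME/logs/server.log 2>&1 &
cd $HOME/anything-llm/collector && NODE_ENV=production nohup node index.js > $HOME/logs/collector.log 2>&1 &
```

For a longer-lived deployment you may prefer a real process manager (systemd, pm2, supervisor) so the services restart on reboot.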
## Updating AnythingLLM
To pick up future updates you can `git pull origin master` to get the latest code and then repeat steps 2 - 5 to redeploy with all changes.

_note:_ You should run `yarn` again in each folder to ensure packages are up to date in case any dependencies were added, changed, or removed.

_note:_ You should `pkill node` before running an update so that you are not running multiple AnythingLLM processes on the same instance, as this can cause conflicts.
### Example update script
```bash
#!/bin/bash

# Pull the latest code.
cd $HOME/anything-llm &&\
git checkout . &&\
git pull origin master &&\
echo "HEAD pulled to commit $(git log -1 --pretty=format:"%h" | tail -n 1)"

# Ask the running server to flush ("freeze") its current ENV settings to disk before restart.
echo "Freezing current ENVs"
curl -I "http://localhost:3001/api/env-dump" | head -n 1 | cut -d$' ' -f2

echo "Rebuilding Frontend"
cd $HOME/anything-llm/frontend && yarn && yarn build && cd $HOME/anything-llm

echo "Copying to Server Public"
rm -rf server/public
cp -r frontend/dist server/public

echo "Killing node processes"
pkill node

echo "Installing collector dependencies"
cd $HOME/anything-llm/collector && yarn

echo "Installing server dependencies & running migrations"
cd $HOME/anything-llm/server && yarn
cd $HOME/anything-llm/server && npx prisma migrate deploy --schema=./prisma/schema.prisma
cd $HOME/anything-llm/server && npx prisma generate

echo "Booting up services."
truncate -s 0 /logs/server.log # Or any other log file location.
truncate -s 0 /logs/collector.log

cd $HOME/anything-llm/server
(NODE_ENV=production node index.js) &> /logs/server.log &

cd $HOME/anything-llm/collector
(NODE_ENV=production node index.js) &> /logs/collector.log &
```
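
The script assumes the repository lives at `$HOME/anything-llm` and that logs are written under `/logs/`; adjust both to your layout. One way to use it, assuming you save it as `update.sh` (a hypothetical name) in your home directory:

```bash
chmod +x $HOME/update.sh

# Run an update manually...
$HOME/update.sh

# ...or schedule it, e.g. weekly on Sunday at 03:00, via `crontab -e`:
# 0 3 * * 0 /bin/bash $HOME/update.sh >> $HOME/update.log 2>&1
```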