diff --git a/docker/HOW_TO_USE_DOCKER.md b/docker/HOW_TO_USE_DOCKER.md
index 2dab18411..812119a64 100644
--- a/docker/HOW_TO_USE_DOCKER.md
+++ b/docker/HOW_TO_USE_DOCKER.md
@@ -2,10 +2,10 @@
 Use the Dockerized version of AnythingLLM for a much faster and complete startup of AnythingLLM.
-
 ### Minimum Requirements
+
 > [!TIP]
-> Running AnythingLLM on AWS/GCP/Azure?
+> Running AnythingLLM on AWS/GCP/Azure?
 > You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
 > you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.
@@ -13,11 +13,12 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup
 - `yarn` and `node` on your machine
 - access to an LLM running locally or remotely

-*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)
+\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

-*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)
+\*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)

 ## Recommend way to run dockerized AnythingLLM!
+
 > [!IMPORTANT]
 > If you are running another service on localhost like Chroma, LocalAi, or LMStudio
 > you will need to use http://host.docker.internal:xxxx to access the service from within
@@ -35,6 +36,7 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup
 > so that you can pull in future updates without deleting your existing data!

 Pull in the latest image from docker. Supports both `amd64` and `arm64` CPU architectures.
+
 ```shell
 docker pull mintplexlabs/anythingllm
 ```
@@ -90,12 +92,15 @@ Go to `http://localhost:3001` and you are now using AnythingLLM! All your data a
 container rebuilds or pulls from Docker Hub.
 ## How to use the user interface
+
 - To access the full application, visit `http://localhost:3001` in your browser.

 ## About UID and GID in the ENV
+
 - The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user UID and GID and what is set in the `.env` file, you may experience permission issues.

 ## Build locally from source _not recommended for casual use_
+
 - `git clone` this repo and `cd anything-llm` to get to the root directory.
 - `touch server/storage/anythingllm.db` to create empty SQLite DB file.
 - `cd docker/`
@@ -105,10 +110,13 @@ container rebuilds or pulls from Docker Hub.
 Your docker host will show the image as online once the build process is completed. This will build the app to `http://localhost:3001`.

 ## ⚠️ Vector DB support ⚠️
+
 Out of the box, all vector databases are supported. Any vector databases requiring special configuration are listed below.

 ### Using local ChromaDB with Dockerized AnythingLLM
+
 - Ensure in your `./docker/.env` file that you have
+
 ```
 #./docker/.env
 ...other configs
@@ -125,14 +133,24 @@ CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on hos
 ## Common questions and fixes

 ### API is not working, cannot login, LLM is "offline"?
+
 You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL
 is not `http://localhost:3001` and instead is something like `http://193.xx.xx.xx:3001` - in this case all you need to do
 is add the following to your `frontend/.env.production` before running `docker-compose up -d --build`
+
 ```
 # frontend/.env.production
 GENERATE_SOURCEMAP=false
 VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
 ```
+
 For example, if the docker instance is available on `192.186.1.222` your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.

+### Having issues with Ollama?
+
+If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` then visit the README below.
+
+[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)
+
 ### Still not working?
-[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)
\ No newline at end of file
+
+[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)
diff --git a/frontend/src/App.jsx b/frontend/src/App.jsx
index 7633af2c4..86f6eb08a 100644
--- a/frontend/src/App.jsx
+++ b/frontend/src/App.jsx
@@ -59,7 +59,7 @@ export default function App() {
             }
           />
         }
+          element={} />
-          {isActive ||
-          isHovered ||
-          gearHover[workspace.id] ||
-          user?.role === "default" ? (
+          {(isActive || isHovered || gearHover[workspace.id]) &&
+          user?.role !== "default" ? (
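The last hunk above is a behavior change, not just formatting: the old JSX condition rendered the gear control whenever any hover/active flag was true *or* the user had the `"default"` role, while the new parenthesized form renders it only on hover/active *and* hides it from `"default"`-role users. A minimal plain-JavaScript sketch of the two conditions (the function names and sample values here are hypothetical; only the boolean expressions mirror the diff):

```javascript
// Old condition: true if ANY flag is set, or if the role is "default".
function showGearOld(isActive, isHovered, gearHover, role) {
  return isActive || isHovered || gearHover || role === "default";
}

// New condition: requires a flag AND a non-"default" role.
// Parentheses matter because && binds tighter than ||.
function showGearNew(isActive, isHovered, gearHover, role) {
  return (isActive || isHovered || gearHover) && role !== "default";
}

// A default-role user hovering a workspace: gear was shown, now hidden.
console.log(showGearOld(false, true, false, "default")); // true
console.log(showGearNew(false, true, false, "default")); // false

// An admin not interacting with the workspace: hidden under both versions.
console.log(showGearOld(false, false, false, "admin")); // false
console.log(showGearNew(false, false, false, "admin")); // false
```

Note that without the added parentheses, `a || b || c && d` would parse as `a || b || (c && d)`, which is yet a third behavior, so the grouping in the new expression is load-bearing.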