From e99c74aec19ccd24b7a869bd674db75a5d695026 Mon Sep 17 00:00:00 2001
From: Sean Hatfield
Date: Wed, 21 Feb 2024 18:42:32 -0800
Subject: [PATCH 1/2] [DOCS] Update Docker documentation to show how to setup Ollama with Dockerized version of AnythingLLM (#774)

* update HOW_TO_USE_DOCKER to help with Ollama setup using docker
* update HOW_TO_USE_DOCKER
* styles update
* create separate README for ollama and link to it in HOW_TO_USE_DOCKER
* styling update
---
 docker/HOW_TO_USE_DOCKER.md               | 28 +++++++++++++---
 server/utils/AiProviders/ollama/README.md | 40 +++++++++++++++++++++++
 2 files changed, 63 insertions(+), 5 deletions(-)
 create mode 100644 server/utils/AiProviders/ollama/README.md

diff --git a/docker/HOW_TO_USE_DOCKER.md b/docker/HOW_TO_USE_DOCKER.md
index 2dab18411..812119a64 100644
--- a/docker/HOW_TO_USE_DOCKER.md
+++ b/docker/HOW_TO_USE_DOCKER.md
@@ -2,10 +2,10 @@
Use the Dockerized version of AnythingLLM for a much faster and complete startup of AnythingLLM.
-
### Minimum Requirements
+
> [!TIP]
-> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.

- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

-*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)
+\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

-*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)
+\*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)

## Recommended way to run Dockerized AnythingLLM!
+
> [!IMPORTANT]
> If you are running another service on localhost like Chroma, LocalAI, or LMStudio
> you will need to use http://host.docker.internal:xxxx to access the service from within
> so that you can pull in future updates without deleting your existing data!

Pull in the latest image from Docker. Supports both `amd64` and `arm64` CPU architectures.
+
```shell
docker pull mintplexlabs/anythingllm
```

Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.

## How to use the user interface
+
- To access the full application, visit `http://localhost:3001` in your browser.

## About UID and GID in the ENV
+
- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user UID and GID and what is set in the `.env` file, you may experience permission issues.

## Build locally from source _not recommended for casual use_
+
- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create an empty SQLite DB file.
- `cd docker/`

Your docker host will show the image as online once the build process is completed. This will serve the app at `http://localhost:3001`.

## ⚠️ Vector DB support ⚠️
+
Out of the box, all vector databases are supported.
Any vector databases requiring special configuration are listed below.

### Using local ChromaDB with Dockerized AnythingLLM
+
- Ensure in your `./docker/.env` file that you have
+
```
#./docker/.env
...other configs

CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on host port, not container.
...
```

## Common questions and fixes

### API is not working, cannot login, LLM is "offline"?
+
You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL
is not `http://localhost:3001` and is instead something like `http://193.xx.xx.xx:3001`. In this case, all you need to do
is add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:
+
```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```
+
For example, if the docker instance is available on `192.186.1.222` your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.

### Having issues with Ollama?
+
+If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434`, then visit the README below.
+
+[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)
+
### Still not working?
-
-[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)
\ No newline at end of file
+
+[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)

diff --git a/server/utils/AiProviders/ollama/README.md b/server/utils/AiProviders/ollama/README.md
new file mode 100644
index 000000000..9e96b2ed0
--- /dev/null
+++ b/server/utils/AiProviders/ollama/README.md
@@ -0,0 +1,40 @@
# Common Issues with Ollama

If you encounter an error stating `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` when using AnythingLLM in a Docker container, it means the Ollama service is not reachable from inside the Docker network: by default, Ollama binds only to localhost (127.0.0.1) on the host, so nothing is listening on the Docker bridge address (172.17.0.1) at port 11434. To resolve this issue and ensure proper communication between the Dockerized AnythingLLM and the Ollama service, you must configure Ollama to bind to 0.0.0.0 or a specific IP address.

### Setting Environment Variables on Mac

If Ollama is run as a macOS application, environment variables should be set using `launchctl`:

1. For each environment variable, call `launchctl setenv`.
   ```bash
   launchctl setenv OLLAMA_HOST "0.0.0.0"
   ```
2. Restart the Ollama application.

### Setting Environment Variables on Linux

If Ollama is run as a systemd service, environment variables should be set using `systemctl`:

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.
2. For each environment variable, add an `Environment` line under the `[Service]` section:
   ```ini
   [Service]
   Environment="OLLAMA_HOST=0.0.0.0"
   ```
3. Save and exit.
4. Reload `systemd` and restart Ollama:
   ```bash
   systemctl daemon-reload
   systemctl restart ollama
   ```

### Setting Environment Variables on Windows

On Windows, Ollama inherits your user and system environment variables.

1. First, quit Ollama by clicking on it in the taskbar.
2. Edit system environment variables from the Control Panel.
3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.
4. Click OK/Apply to save.
5. Run `ollama` from a new terminal window.
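### Verifying that Ollama is reachable from Docker

After changing `OLLAMA_HOST`, it is worth confirming that Ollama actually answers on the Docker-facing address before retrying AnythingLLM. The commands below are only a rough sketch of that check, not part of the official steps: the `curlimages/curl` image and the `--add-host` mapping are assumptions for a stock Docker setup, and the port is Ollama's default `11434`.

```bash
# From the host: /api/tags lists the models Ollama can serve.
curl http://localhost:11434/api/tags

# From inside a container: hit the same endpoint through the host alias.
# On Linux engines without Docker Desktop, host.docker.internal usually has to be
# mapped explicitly via --add-host=host.docker.internal:host-gateway.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -sS http://host.docker.internal:11434/api/tags
```

If the first command succeeds but the second does not, the problem is the host-side binding or firewall rather than anything inside AnythingLLM.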
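### Pointing Dockerized AnythingLLM at Ollama

Once Ollama is reachable, the Dockerized AnythingLLM instance still needs to know where to find it. This can be done from the in-app LLM preference screen; if you would rather pre-seed it in `./docker/.env`, the entry might look like the sketch below. The key names shown (`LLM_PROVIDER`, `OLLAMA_BASE_PATH`, `OLLAMA_MODEL_PREF`, `OLLAMA_MODEL_TOKEN_LIMIT`) are an assumption based on the server's `.env.example` and may differ between versions, so double-check them against your copy.

```
#./docker/.env
...other configs

# Key names are assumptions - verify against server/.env.example for your version.
LLM_PROVIDER='ollama'
OLLAMA_BASE_PATH='http://host.docker.internal:11434' # Reach Ollama on the host, not in the container
OLLAMA_MODEL_PREF='llama2'
OLLAMA_MODEL_TOKEN_LIMIT=4096
```

As in the ChromaDB example in `docker/HOW_TO_USE_DOCKER.md`, `host.docker.internal` is what lets the container talk to a service running on the host machine.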
From 424ca142c195bfae1852f0e93433df02305e90dc Mon Sep 17 00:00:00 2001
From: Timothy Carambat
Date: Thu, 22 Feb 2024 09:15:27 -0800
Subject: [PATCH 2/2] Fix default role visibility permissions (#776)

---
 frontend/src/App.jsx                                       | 2 +-
 frontend/src/components/Sidebar/ActiveWorkspaces/index.jsx | 6 ++----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/frontend/src/App.jsx b/frontend/src/App.jsx
index 7633af2c4..86f6eb08a 100644
--- a/frontend/src/App.jsx
+++ b/frontend/src/App.jsx
@@ -59,7 +59,7 @@ export default function App() {
} /> }
+ element={} />
- {isActive ||
- isHovered ||
- gearHover[workspace.id] ||
- user?.role === "default" ? (
+ {(isActive || isHovered || gearHover[workspace.id]) &&
+ user?.role !== "default" ? (