Merge branch 'master' of github.com:Mintplex-Labs/anything-llm into render

timothycarambat 2024-02-22 10:07:53 -08:00
commit e60bea1273
4 changed files with 66 additions and 10 deletions

View File

@@ -2,10 +2,10 @@
Use the Dockerized version of AnythingLLM for a much faster and more complete startup.
### Minimum Requirements
> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.
@@ -13,11 +13,12 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup
- `yarn` and `node` on your machine
- access to an LLM running locally or remotely
- *AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)
+ \*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)
- *AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)
+ \*AnythingLLM by default embeds text on instance privately [Learn More](../server/storage/models/README.md)
## Recommended way to run dockerized AnythingLLM!
> [!IMPORTANT]
> If you are running another service on localhost like Chroma, LocalAI, or LMStudio,
> you will need to use http://host.docker.internal:xxxx to access the service from within
@@ -35,6 +36,7 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup
> so that you can pull in future updates without deleting your existing data!
Pull in the latest image from docker. Supports both `amd64` and `arm64` CPU architectures.
```shell
docker pull mintplexlabs/anythingllm
```
@@ -90,12 +92,15 @@ Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.
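The run command itself falls outside this diff hunk. A minimal, illustrative sketch (the flags and host path here are assumptions, not the project's exact command) that mounts a host folder so data survives rebuilds:

```shell
# Illustrative only: mount a host directory so storage persists across container rebuilds
docker run -d -p 3001:3001 \
  -v "$HOME/anythingllm:/app/server/storage" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```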
## How to use the user interface
- To access the full application, visit `http://localhost:3001` in your browser.
## About UID and GID in the ENV
- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user UID and GID and what is set in the `.env` file, you may experience permission issues.
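A quick sketch of how to check for a mismatch (the `UID`/`GID` variable names are assumed to match the defaults shipped in `docker/.env`):

```shell
# Print your host user's IDs; they should match the values set in docker/.env
id -u   # e.g. 1000
id -g   # e.g. 1000

# If they differ, update the (assumed) variables in docker/.env accordingly, e.g.:
# UID='1002'
# GID='1002'
```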
## Build locally from source _not recommended for casual use_
- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create empty SQLite DB file.
- `cd docker/`
@@ -105,10 +110,13 @@ container rebuilds or pulls from Docker Hub.
Your docker host will show the image as online once the build process is completed. The app will then be available at `http://localhost:3001`.
## ⚠️ Vector DB support ⚠️
Out of the box, all vector databases are supported. Any vector databases requiring special configuration are listed below.
### Using local ChromaDB with Dockerized AnythingLLM
- Ensure that your `./docker/.env` file contains
```
#./docker/.env
...other configs
CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on host
```
@@ -125,14 +133,24 @@
## Common questions and fixes
### API is not working, cannot login, LLM is "offline"?
You are likely running the docker container on a remote machine like EC2 or some other instance, where the reachable URL
is not `http://localhost:3001` but something like `http://193.xx.xx.xx:3001`. In this case, add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:
```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```
For example, if the docker instance is available on `192.186.1.222` your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.
### Having issues with Ollama?
If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434`, then see the README below.
[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)
### Still not working?
[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)

View File

@@ -59,7 +59,7 @@ export default function App() {
<Route path="/login" element={<Login />} />
<Route
path="/workspace/:slug/settings/:tab"
- element={<PrivateRoute Component={WorkspaceSettings} />}
+ element={<ManagerRoute Component={WorkspaceSettings} />}
/>
<Route
path="/workspace/:slug"

View File

@@ -114,10 +114,8 @@ export default function ActiveWorkspaces() {
: truncate(workspace.name, 20)}
</p>
</div>
- {isActive ||
-   isHovered ||
-   gearHover[workspace.id] ||
-   user?.role === "default" ? (
+ {(isActive || isHovered || gearHover[workspace.id]) &&
+   user?.role !== "default" ? (
<div className="flex items-center gap-x-2">
<button
type="button"

View File

@@ -0,0 +1,40 @@
# Common Issues with Ollama
If you encounter an error stating `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` when using AnythingLLM in a Docker container, it means the container cannot reach Ollama at the host's address on the Docker virtual network (`172.17.0.1`). By default, Ollama binds only to localhost (`127.0.0.1`), so port 11434 is not reachable from inside the container. To resolve this issue and ensure proper communication between the Dockerized AnythingLLM and the Ollama service, configure Ollama to bind to `0.0.0.0` or a specific IP address.
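A quick way to verify the fix from the host, assuming the default Docker bridge gateway `172.17.0.1` and Ollama's default port `11434`:

```bash
# Should print "Ollama is running" once Ollama is bound to 0.0.0.0
curl http://172.17.0.1:11434
```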
### Setting Environment Variables on Mac
If Ollama is run as a macOS application, environment variables should be set using `launchctl`:
1. For each environment variable, call `launchctl setenv`.
```bash
launchctl setenv OLLAMA_HOST "0.0.0.0"
```
2. Restart the Ollama application.
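To confirm the variable is set, `launchctl getenv` reads back the value exported above:

```bash
launchctl getenv OLLAMA_HOST   # expect: 0.0.0.0
```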
### Setting Environment Variables on Linux
If Ollama is run as a systemd service, environment variables should be set using `systemctl`:
1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.
2. For each environment variable, add a line `Environment` under the section `[Service]`:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```
3. Save and exit.
4. Reload `systemd` and restart Ollama:
```bash
systemctl daemon-reload
systemctl restart ollama
```
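To confirm the new binding took effect (assuming `ss` is available on your distribution), check that Ollama now listens on all interfaces rather than only loopback:

```bash
sudo ss -tlnp | grep 11434   # expect 0.0.0.0:11434, not 127.0.0.1:11434
```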
### Setting Environment Variables on Windows
On Windows, Ollama inherits your user and system environment variables.
1. First, quit Ollama by clicking on it in the taskbar.
2. Edit system environment variables from the Control Panel.
3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.
4. Click OK/Apply to save.
5. Run `ollama` from a new terminal window.
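As an alternative to the Control Panel UI, a minimal sketch using `setx` from a terminal (this persists the variable for your user account; restart Ollama from a new terminal afterwards so it picks the value up):

```shell
setx OLLAMA_HOST "0.0.0.0"
```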