Merge branch 'master' of github.com:Mintplex-Labs/anything-llm into render
Commit e60bea1273
@@ -2,10 +2,10 @@

Use the Dockerized version of AnythingLLM for a much faster and more complete startup.

### Minimum Requirements

> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to how much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.
@@ -13,11 +13,12 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup

- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb).

\*AnythingLLM by default embeds text on-instance privately. [Learn More](../server/storage/models/README.md)

## Recommended way to run dockerized AnythingLLM!

> [!IMPORTANT]
> If you are running another service on localhost like Chroma, LocalAI, or LMStudio
> you will need to use http://host.docker.internal:xxxx to access the service from within
@@ -35,6 +36,7 @@ Use the Dockerized version of AnythingLLM for a much faster and complete startup

> so that you can pull in future updates without deleting your existing data!
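As a hedged illustration of the `host.docker.internal` note above (the service and port are hypothetical, and the `--add-host` flag is typically only needed on native Linux Docker engines where the alias does not resolve by default): a Chroma or LMStudio server reachable on the host at `localhost:1234` must be addressed from inside the container as `http://host.docker.internal:1234`.

```shell
# Start the container with the host alias mapped explicitly (usually only
# required on native Linux Docker engines; Docker Desktop provides it already).
docker run -d -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  mintplexlabs/anythingllm
```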
Pull in the latest image from Docker Hub. Supports both `amd64` and `arm64` CPU architectures.

```shell
docker pull mintplexlabs/anythingllm
```
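The run command itself falls outside this excerpt; below is a minimal sketch of running the pulled image with storage mounted on the host (the `STORAGE_LOCATION` path, the in-container mount targets, and the `STORAGE_DIR` variable are assumptions here, so check the project README for the canonical command):

```shell
# Keep storage on the host so your data survives container rebuilds and image pulls.
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -v "$STORAGE_LOCATION/.env:/app/server/.env" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```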
@@ -90,12 +92,15 @@ Go to `http://localhost:3001` and you are now using AnythingLLM! All your data a

container rebuilds or pulls from Docker Hub.

## How to use the user interface

- To access the full application, visit `http://localhost:3001` in your browser.

## About UID and GID in the ENV

- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user UID and GID and what is set in the `.env` file, you may experience permission issues.
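A quick way to check for such a mismatch (a sketch; the exact `UID`/`GID` key names in `./docker/.env` are assumed from the description above):

```shell
# Print your host user's numeric UID and GID and compare them
# with the values set in ./docker/.env.
id -u   # e.g. 1000
id -g   # e.g. 1000
```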
## Build locally from source _not recommended for casual use_

- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create an empty SQLite DB file.
- `cd docker/`
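The remaining build steps fall outside this excerpt; as a hedged sketch, the build is kicked off from `docker/` with the same compose command referenced later in this guide (copying `.env.example` to `.env` first is an assumption, not a step shown here):

```shell
cp .env.example .env           # assumed: create your local config first
docker-compose up -d --build   # build the image and start the container
```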
@@ -105,10 +110,13 @@ container rebuilds or pulls from Docker Hub.

Your docker host will show the image as online once the build process completes, and the app will be served at `http://localhost:3001`.
## ⚠️ Vector DB support ⚠️

Out of the box, all vector databases are supported. Any vector databases requiring special configuration are listed below.

### Using local ChromaDB with Dockerized AnythingLLM

- Ensure in your `./docker/.env` file that you have
```
#./docker/.env
...other configs
```

@@ -125,14 +133,24 @@ CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on hos
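The rest of that `.env` block is truncated in this diff; judging from the `CHROMA_ENDPOINT` line visible in the hunk header above, a filled-in version might look like the following (the `VECTOR_DB` key name is an assumption, not shown in this excerpt):

```
#./docker/.env
# ...other configs
VECTOR_DB="chroma"
CHROMA_ENDPOINT='http://host.docker.internal:8000' # reach the Chroma server running on the Docker host
```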
## Common questions and fixes

### API is not working, cannot login, LLM is "offline"?

You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL is not `http://localhost:3001` and is instead something like `http://193.xx.xx.xx:3001`. In this case, add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:

```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```

For example, if the docker instance is available on `192.186.1.222`, your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.186.1.222:3001/api"` in `frontend/.env.production`.
### Having issues with Ollama?

If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434`, then see the README below.

[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)

### Still not working?

[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)
```diff
@@ -59,7 +59,7 @@ export default function App() {
           <Route path="/login" element={<Login />} />
           <Route
             path="/workspace/:slug/settings/:tab"
-            element={<PrivateRoute Component={WorkspaceSettings} />}
+            element={<ManagerRoute Component={WorkspaceSettings} />}
           />
           <Route
             path="/workspace/:slug"
```
```diff
@@ -114,10 +114,8 @@ export default function ActiveWorkspaces() {
                   : truncate(workspace.name, 20)}
               </p>
             </div>
-            {isActive ||
-            isHovered ||
-            gearHover[workspace.id] ||
-            user?.role === "default" ? (
+            {(isActive || isHovered || gearHover[workspace.id]) &&
+            user?.role !== "default" ? (
               <div className="flex items-center gap-x-2">
                 <button
                   type="button"
```
server/utils/AiProviders/ollama/README.md (new file, 40 lines)

@@ -0,0 +1,40 @@
# Common Issues with Ollama

If you encounter an error stating `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` when using AnythingLLM in a Docker container, it is because Ollama by default binds only to localhost (127.0.0.1), so port 11434 is not reachable from the host's IP inside the Docker virtual network. To resolve this and allow the Dockerized AnythingLLM to communicate with the Ollama service, you must configure Ollama to bind to 0.0.0.0 or a specific IP address.

### Setting Environment Variables on Mac

If Ollama is run as a macOS application, environment variables should be set using `launchctl`:

1. For each environment variable, call `launchctl setenv`.

   ```bash
   launchctl setenv OLLAMA_HOST "0.0.0.0"
   ```

2. Restart the Ollama application.

### Setting Environment Variables on Linux

If Ollama is run as a systemd service, environment variables should be set using `systemctl`:

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.

2. For each environment variable, add an `Environment` line under the `[Service]` section:

   ```ini
   [Service]
   Environment="OLLAMA_HOST=0.0.0.0"
   ```

3. Save and exit.

4. Reload `systemd` and restart Ollama:

   ```bash
   systemctl daemon-reload
   systemctl restart ollama
   ```
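As a quick sanity check after restarting (a hedged sketch, assuming the default port 11434 and an example LAN address), Ollama's root endpoint should now answer on a non-loopback interface as well:

```bash
# Replace 192.168.1.50 with your machine's LAN IP; a plain
# "Ollama is running" response means the 0.0.0.0 bind took effect.
curl http://192.168.1.50:11434
```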
### Setting Environment Variables on Windows

On Windows, Ollama inherits your user and system environment variables.

1. First, quit Ollama by clicking on it in the taskbar.

2. Edit system environment variables from the Control Panel.

3. Edit or create new variable(s) for your user account for `OLLAMA_HOST`, `OLLAMA_MODELS`, etc.

4. Click OK/Apply to save.

5. Run `ollama` from a new terminal window.
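Equivalently, steps 2-4 can be done from a terminal (a sketch; `setx` persists a per-user environment variable on Windows):

```
REM Stores OLLAMA_HOST in your user environment; open a new terminal afterwards.
setx OLLAMA_HOST "0.0.0.0"
```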