diff --git a/server/storage/models/.gitignore b/server/storage/models/.gitignore
index c3183aa4..55624667 100644
--- a/server/storage/models/.gitignore
+++ b/server/storage/models/.gitignore
@@ -1,2 +1,3 @@
 Xenova
-downloaded/*
\ No newline at end of file
+downloaded/*
+!downloaded/.placeholder
\ No newline at end of file
diff --git a/server/storage/models/README.md b/server/storage/models/README.md
index 8ed7ec0d..1a63484c 100644
--- a/server/storage/models/README.md
+++ b/server/storage/models/README.md
@@ -30,4 +30,8 @@ If you would like to use a local Llama compatible LLM model for chatting you can
 > If running in Docker you should be running the container to a mounted storage location on the host machine so you
 > can update the storage files directly without having to re-download or re-build your docker container. [See suggested Docker config](../../../README.md#recommended-usage-with-docker-easy)
 
-All local models you want to have available for LLM selection should be placed in the `storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
\ No newline at end of file
+> [!NOTE]
+> `/server/storage/models/downloaded` is the default location that your model files should be at.
+> Your storage directory may differ if you changed the STORAGE_DIR environment variable.
+
+All local models you want to have available for LLM selection should be placed in the `server/storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
\ No newline at end of file
diff --git a/server/storage/models/downloaded/.placeholder b/server/storage/models/downloaded/.placeholder
new file mode 100644
index 00000000..6121f697
--- /dev/null
+++ b/server/storage/models/downloaded/.placeholder
@@ -0,0 +1 @@
+All your .GGUF model file downloads you want to use for chatting should go into this folder.
\ No newline at end of file
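For context, a minimal illustrative sketch (not the project's actual implementation) of the model-discovery rule the README changes describe: only `.gguf` files inside the `downloaded` folder are offered for selection, and the base path follows the `STORAGE_DIR` environment variable when set. The fallback path and the `selectable_models` helper below are assumptions for illustration only.

```python
# Illustrative sketch only -- not AnythingLLM's actual code. It mirrors the
# README's rule that only .gguf files in <STORAGE_DIR>/models/downloaded
# appear in the UI's model selector; .placeholder and other files are ignored.
import os

# The README notes STORAGE_DIR may override the default storage location;
# the fallback path here is a hypothetical stand-in for the default layout.
STORAGE_DIR = os.environ.get("STORAGE_DIR", os.path.join("server", "storage"))
DOWNLOADS = os.path.join(STORAGE_DIR, "models", "downloaded")

def selectable_models(path: str = DOWNLOADS) -> list[str]:
    """Return the .gguf files that would be selectable from the UI."""
    if not os.path.isdir(path):
        return []
    return sorted(
        name for name in os.listdir(path)
        if name.lower().endswith(".gguf")
    )

if __name__ == "__main__":
    print(selectable_models())
```

Because discovery is extension-based, the `.placeholder` file added by this patch keeps the otherwise-empty `downloaded/` directory tracked in git without ever showing up as a selectable model.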