anything-llm/docker
Timothy Carambat 655ebd9479
[Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413)
* Implement use of native embedder (all-MiniLM-L6-v2)
stop showing Prisma queries during dev

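The native embedder runs a sentence-transformer entirely in-process, so no embedding API key is needed. A minimal sketch of that idea, assuming the @xenova/transformers package and its Xenova/all-MiniLM-L6-v2 ONNX port (the exact AnythingLLM internals may differ):

```ts
import { pipeline } from "@xenova/transformers";

// Embed a batch of texts locally; model weights are downloaded and
// cached on first use, then everything runs in-process.
async function embedTexts(texts: string[]): Promise<number[][]> {
  const extractor = await pipeline(
    "feature-extraction",
    "Xenova/all-MiniLM-L6-v2"
  );
  // Mean-pool token embeddings and L2-normalize: one 384-dim
  // vector per input text.
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  return output.tolist();
}
```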
* Add native embedder as an available embedder selection

* wrap model loader in try/catch

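Wrapping the loader keeps a corrupt or missing GGUF file from taking the whole server down. A rough sketch of the guard pattern (the loader signature here is hypothetical):

```ts
// Guard an arbitrary model loader: log a readable error and return
// null instead of crashing on a bad or missing model file.
async function safeLoad<T>(
  load: () => Promise<T>,
  modelPath: string
): Promise<T | null> {
  try {
    return await load();
  } catch (error) {
    console.error(`Failed to load model at ${modelPath}:`, error);
    return null; // caller can fall back to a configured remote LLM
  }
}
```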
* print progress on download

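Progress output matters when the download is a multi-gigabyte GGUF file. A sketch of streaming a download to disk while logging percent complete, assuming Node 18+ (built-in fetch and async-iterable streams); the function name is illustrative:

```ts
import { createWriteStream } from "node:fs";

// Stream a large model file to disk, printing percent complete
// as chunks arrive (when content-length is available).
async function downloadWithProgress(url: string, dest: string): Promise<void> {
  const res = await fetch(url);
  if (!res.ok || !res.body) throw new Error(`Download failed: ${res.status}`);
  const total = Number(res.headers.get("content-length") ?? 0);
  const file = createWriteStream(dest);
  let received = 0;
  for await (const chunk of res.body) {
    received += chunk.length;
    file.write(chunk);
    if (total > 0) {
      process.stdout.write(
        `\rdownloading: ${((received / total) * 100).toFixed(1)}%`
      );
    }
  }
  file.end();
  process.stdout.write("\n");
}
```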
* add built-in LLM support (experimental)

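The built-in LLM rides on llama.cpp bindings for Node. A minimal sketch of loading a local GGUF file and answering one prompt, assuming the node-llama-cpp 2.x API and an illustrative model path:

```ts
import { LlamaModel, LlamaContext, LlamaChatSession } from "node-llama-cpp";

// Load a local GGUF model and answer a single prompt. The path
// below is illustrative, not the app's actual storage layout.
const model = new LlamaModel({
  modelPath: "/storage/models/llama-2-7b-chat.Q4_K_M.gguf",
});
const context = new LlamaContext({ model });
const session = new LlamaChatSession({ context });

const answer = await session.prompt("What is AnythingLLM?");
console.log(answer);
```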
* update progress output for embedder

* move embedder selection options to component

* safety checks for modelfile

* update ref

* Hide selection when on hosted subdomain

* update documentation
hide localLlama when on hosted

* safety checks for storage of models

* update dockerfile to pre-build Llama.cpp bindings

* update lockfile

* add langchain doc comment

* remove extraneous --no-metal option

* Show data handling for private LLM

* persist model in memory for N+1 chats

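Holding the loaded model in a module-scoped cache means only the first chat pays the multi-second GGUF load; every later chat reuses the same in-memory instance. A sketch of that pattern (names are illustrative, not the exact AnythingLLM internals):

```ts
// Module-scoped cache so the expensive GGUF load happens once and
// subsequent chats reuse the same in-memory model instance.
const modelCache = new Map<string, Promise<unknown>>();

function getCachedModel(
  modelPath: string,
  load: (path: string) => Promise<unknown>
): Promise<unknown> {
  if (!modelCache.has(modelPath)) {
    // Cache the promise, not the result, so concurrent first
    // requests share one load instead of racing each other.
    modelCache.set(modelPath, load(modelPath));
  }
  return modelCache.get(modelPath)!;
}
```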
* update import
update dev comment on token model size

* update primary README

* chore: more readme updates and remove screenshots - too much to maintain, just use the app!

* remove screenshot link
2023-12-07 14:48:27 -08:00
.env.example chore: remove unused NO_DEBUG env 2023-12-07 14:14:30 -08:00
docker-compose.yml Adding url uploads to document picker (#375) 2023-11-16 17:15:01 -08:00
docker-entrypoint.sh Aws docker fixes (#309) 2023-10-29 11:03:41 -07:00
docker-healthcheck.sh Docker support (#34) 2023-06-13 11:26:11 -07:00
Dockerfile [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413) 2023-12-07 14:48:27 -08:00
HOW_TO_USE_DOCKER.md Documentation update 2023-12-06 11:38:40 -08:00