Lama-cleaner: Image inpainting tool powered by SOTA AI model
https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4
- Support multiple model architectures
- High resolution support
- Multi-stroke support. Press and hold the cmd/ctrl key to enable multi-stroke mode.
- Zoom & Pan
- Keep image EXIF data
Quick Start
Install requirements: pip3 install -r requirements.txt
Start server with LaMa model
python3 main.py --device=cuda --port=8080 --model=lama
--crop-trigger-size: If the image is larger than crop-trigger-size, crop the painted area from the original image and run inference on the crop only. This is mainly for performance and memory reasons on very large images. Default is 2042,2042.
--crop-margin: Margin around the bounding box of the painted stroke when crop mode is triggered. Default is 256.
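For example, crop behavior can be tuned at startup with the two flags documented above (the values here are illustrative, not recommended defaults):

```shell
# Trigger crop-based inference only for images larger than 3000x3000,
# and keep a 512px margin around the painted stroke when cropping.
python3 main.py --device=cuda --port=8080 --model=lama \
  --crop-trigger-size=3000,3000 --crop-margin=512
```

A larger margin gives the model more surrounding context at the cost of a bigger crop to process.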
Start server with LDM model
python3 main.py --device=cuda --port=8080 --model=ldm --ldm-steps=50
--ldm-steps: The larger the value, the better the result, but inference takes longer.
Diffusion models are MUCH slower than GANs (a 1080x720 image takes 8s on a 3090), but they can produce better results than LaMa.
(Comparison images: Original Image | LaMa | LDM)
Blogs about diffusion models:
- https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- https://yang-song.github.io/blog/2021/score/
Development
Only needed if you plan to modify the frontend and recompile it yourself.
Frontend
The frontend code is adapted from cleanup.pictures; you can try their great online service there.
- Install dependencies:
cd lama_cleaner/app/ && yarn
- Start development server:
yarn dev
- Build:
yarn build
Docker
Run within a Docker container. Set CACHE_DIR to the path where models are stored. Optionally add a -d option to the docker run commands below to run as a daemon.
Build Docker image
docker build -f Dockerfile -t lamacleaner .
Run Docker (cpu)
docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
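As noted above, -d runs the container in the background. A detached variant of the CPU command might look like this (the container name lamacleaner-server is an arbitrary choice for illustration):

```shell
# Same CPU command, detached; a named container is easier to stop and inspect later.
docker run -d --name lamacleaner-server -p 8080:8080 \
  -e CACHE_DIR=/app/models \
  -v $(pwd)/models:/app/models -v $(pwd):/app \
  --rm lamacleaner python3 main.py --device=cpu --port=8080

# Follow the server logs, then stop it when done (--rm removes it on stop).
docker logs -f lamacleaner-server
docker stop lamacleaner-server
```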
Run Docker (gpu)
docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
Then open http://localhost:8080