Image inpainting tool powered by SOTA AI models. Remove unwanted objects, defects, or people from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.
Lama-cleaner: Image inpainting tool powered by SOTA AI model
https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4
- Support multiple model architectures
- High resolution support
- Run as a desktop APP
- Multi stroke support. Press and hold the `cmd/ctrl` key to enable multi stroke mode.
- Zoom & Pan
- Keep image EXIF data
Quick Start
- Install requirements: `pip3 install -r requirements.txt`
- Start the server: `python3 main.py`, then open http://localhost:8080
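For example, the flags documented in the table below can be combined in a single command; the image path here is only a hypothetical placeholder:

```bash
# Run on CPU, serve on port 8080, and preload an image into the editor
# (./my_photo.jpg is a placeholder path, not part of the repository)
python3 main.py --model lama --device cpu --port 8080 --input ./my_photo.jpg
```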
Available command-line arguments for `main.py`
| Name | Description | Default |
| --- | --- | --- |
| --model | lama or ldm. See details in Model Comparison | lama |
| --device | cuda or cpu | cuda |
| --ldm-steps | Larger values give better results but take more time | 50 |
| --crop-trigger-size | If the image is larger than crop-trigger-size, crop each masked area from the original image for inference. Mainly for performance and memory reasons on very large images. | 2042,2042 |
| --crop-margin | Margin around the bounding box of the painted stroke when crop mode is triggered | 256 |
| --gui | Launch lama-cleaner as a desktop application | |
| --gui_size | Set the window size for the application | 1200 900 |
| --input | Path to an image to load by default | None |
| --port | Port for the Flask web server | 8080 |
| --debug | Enable debug mode for the Flask web server | |
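As a quick sketch of the desktop mode, the documented `--gui` and `--gui_size` flags can be combined like this (1200 900 is simply the default window size):

```bash
# Launch lama-cleaner as a desktop application with a 1200x900 window
python3 main.py --gui --gui_size 1200 900
```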
Model Comparison
The diffusion model (ldm) is MUCH slower than the GAN model (lama) (a 1080x720 image takes about 8s on a 3090), but it can produce better results, as in the example below:
*(Side-by-side comparison images: Original Image, LaMa, LDM)*
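To try the diffusion model, the documented `--model` and `--ldm-steps` flags can be combined as below; 100 is only an illustrative step count (the default is 50):

```bash
# Use the LDM model; more steps generally improve quality but take longer
python3 main.py --model ldm --ldm-steps 100 --device cuda
```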
Blogs about diffusion models:
- https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- https://yang-song.github.io/blog/2021/score/
Development
This section is only needed if you plan to modify the frontend and rebuild it yourself.
Frontend
The frontend code is modified from cleanup.pictures. You can try their great online service here.
- Install dependencies: `cd lama_cleaner/app/ && yarn`
- Start the development server: `yarn start`
- Build: `yarn build`
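Run in sequence, the documented commands for producing a fresh production build look like this:

```bash
# Install frontend dependencies, then build the production bundle
cd lama_cleaner/app/
yarn
yarn build
```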
Docker
Run within a Docker container. Set `CACHE_DIR` to the models location path. Optionally add a `-d` option to the `docker run` command below to run as a daemon.
Build Docker image
docker build -f Dockerfile -t lamacleaner .
Run Docker (cpu)
docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
Run Docker (gpu)
docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
Then open http://localhost:8080
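For example, the CPU command above run as a daemon (only the `-d` flag mentioned earlier is added):

```bash
# Same as the CPU command above, but detached so it runs in the background
docker run -d -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
```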