
Lama-cleaner: Image inpainting tool powered by SOTA AI model

https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4

  • Support multiple model architectures
    1. LaMa
    2. LDM
  • High resolution support
  • Multi-stroke support: press and hold the Cmd/Ctrl key to enable multi-stroke mode.
  • Zoom & Pan
  • Keep image EXIF data
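The "Keep image EXIF data" feature above can be sketched as follows. This is a minimal illustration, assuming a Pillow-style workflow in which the original image's EXIF block is forwarded when the edited result is saved (the helper name is hypothetical, not the project's actual API):

```python
from PIL import Image


def save_with_exif(edited: Image.Image, original: Image.Image, path: str) -> None:
    """Save the edited image, carrying over the original's EXIF data.

    Hypothetical helper: Pillow's `save(exif=...)` keyword writes the
    EXIF block back into formats that support it (e.g. JPEG).
    """
    exif = original.getexif()
    edited.save(path, exif=exif)
```

Without a step like this, most image pipelines silently drop metadata such as orientation, timestamp, and camera model when they re-encode the result.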

Quick Start

Install requirements: pip3 install -r requirements.txt

Start server with LaMa model

python3 main.py --device=cuda --port=8080 --model=lama
  • --crop-trigger-size: If the image is larger than crop-trigger-size, crop each area from the original image for inference, mainly for performance and memory reasons on very large images. Default: 2042,2042.
  • --crop-size: Size of each crop used when --crop-trigger-size is exceeded. Default: 512,512.
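The crop behaviour described by these two flags can be sketched as below. This is an illustrative reconstruction, not the project's actual implementation: the trigger check and the clamped crop window are assumptions based on the flag descriptions above.

```python
def needs_crop(image_size, trigger_size=(2042, 2042)):
    """Return True if either image dimension exceeds the trigger size."""
    w, h = image_size
    tw, th = trigger_size
    return w > tw or h > th


def crop_box_around(center, crop_size, image_size):
    """Clamp a crop_size window centered on `center` to the image bounds.

    Returns a (left, top, right, bottom) box, shifted inward when the
    center is too close to an edge so the window stays inside the image.
    """
    cx, cy = center
    cw, ch = crop_size
    w, h = image_size
    left = min(max(cx - cw // 2, 0), max(w - cw, 0))
    top = min(max(cy - ch // 2, 0), max(h - ch, 0))
    return (left, top, min(left + cw, w), min(top + ch, h))
```

Running inference on a few 512x512 crops around the masked regions keeps GPU memory bounded regardless of how large the input photo is.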

Start server with LDM model

python3 main.py --device=cuda --port=8080 --model=ldm --ldm-steps=50

--ldm-steps: The larger the value, the better the result, but inference takes longer.

The diffusion model is much slower than GAN-based models (a 1080x720 image takes about 8 s on an RTX 3090), but it can produce better results than LaMa.
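Because diffusion sampling runs the denoising network once per step, runtime grows roughly linearly with --ldm-steps. A back-of-the-envelope estimator, where the per-step cost is a hypothetical figure derived from the ~8 s example above (assuming it was measured at 50 steps):

```python
def estimate_ldm_runtime(steps, seconds_per_step=0.16):
    """Rough runtime estimate for LDM inpainting.

    Assumption for illustration: sampling cost scales linearly with the
    step count, and 0.16 s/step approximates a 1080x720 image on a 3090.
    """
    return steps * seconds_per_step
```

So doubling --ldm-steps roughly doubles inference time, which is the trade-off to weigh against the quality gain.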

Comparison (left to right): Original Image, LaMa result, LDM result.

Blogs about diffusion models:

Development

Only needed if you plan to modify the frontend and recompile yourself.

Frontend

The frontend code is modified from cleanup.pictures. You can experience their great online service here.

  • Install dependencies: cd lama_cleaner/app/ && yarn
  • Start development server: yarn dev
  • Build: yarn build

Docker

Run within a Docker container. Set CACHE_DIR to the path where models are stored. Optionally add the -d option to the docker run commands below to run as a daemon.
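How CACHE_DIR might be consumed inside the container can be sketched like this. It is an assumption for illustration (the function name and the default fallback path are hypothetical, not the project's actual code):

```python
import os
from pathlib import Path


def resolve_model_dir(default="~/.cache/torch/hub/checkpoints"):
    """Pick the directory for downloaded model weights.

    Prefers the CACHE_DIR environment variable (as set via `docker run -e`),
    falling back to a conventional local cache path otherwise.
    """
    cache_dir = os.environ.get("CACHE_DIR")
    return Path(cache_dir) if cache_dir else Path(default).expanduser()
```

Mounting a host directory at that path (the -v $(pwd)/models:/app/models flag below) means model weights are downloaded once and reused across container runs.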

Build Docker image

docker build -f Dockerfile -t lamacleaner .

Run Docker (cpu)

docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080

Run Docker (gpu)

docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080

Then open http://localhost:8080
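Besides the web UI, the running server can in principle be called over HTTP. The sketch below builds such a request with only the standard library; note that the /inpaint endpoint name and the "image"/"mask" multipart field names are assumptions for illustration, not documented API details:

```python
import io
import urllib.request
import uuid


def build_inpaint_request(image_bytes, mask_bytes,
                          url="http://localhost:8080/inpaint"):
    """Build a multipart/form-data POST carrying an image and its mask.

    Endpoint path and field names are hypothetical; check the server
    code for the real API before relying on this.
    """
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    for name, data in (("image", image_bytes), ("mask", mask_bytes)):
        body.write(f"--{boundary}\r\n".encode())
        body.write(
            f'Content-Disposition: form-data; name="{name}"; '
            f'filename="{name}.png"\r\n'.encode()
        )
        body.write(b"Content-Type: image/png\r\n\r\n")
        body.write(data)
        body.write(b"\r\n")
    body.write(f"--{boundary}--\r\n".encode())
    return urllib.request.Request(
        url,
        data=body.getvalue(),
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
```

The returned Request can be sent with urllib.request.urlopen() once the server is up; the response body would be the inpainted image.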

Like My Work?

Sanster