Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in them.

Lama-cleaner: Image inpainting tool powered by SOTA AI model

https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4

  • Support multiple model architectures
    1. LaMa
    2. LDM
  • High resolution support
  • Run as a desktop app
  • Multi-stroke support: press and hold the cmd/ctrl key to enable multi-stroke mode
  • Zoom & Pan
  • Keep image EXIF data
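
Preserving EXIF data generally means copying the original file's EXIF bytes onto the edited result before saving. The sketch below shows the technique with Pillow; it is illustrative only, and lama-cleaner's actual implementation may differ (the function name `save_with_exif` is made up here).

```python
from PIL import Image

def save_with_exif(src_path: str, edited: Image.Image, dst_path: str) -> None:
    """Copy EXIF bytes from the source file onto the edited result."""
    # JPEG files expose their raw EXIF payload via Image.info["exif"].
    exif = Image.open(src_path).info.get("exif", b"")
    if exif:
        edited.save(dst_path, exif=exif)
    else:
        edited.save(dst_path)
```

Without this step, `Image.save` writes the edited image with no EXIF block, so orientation, camera model, and timestamps would be lost.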

Quick Start

  1. Install requirements: pip3 install -r requirements.txt
  2. Start server: python3 main.py, open http://localhost:8080

Available commands for main.py

--model              lama or ldm; see details in Model Comparison. Default: lama
--device             cuda or cpu. Default: cuda
--ldm-steps          Number of sampling steps for the LDM model. Larger values give better results but take longer. Default: 50
--crop-trigger-size  If the image is larger than crop-trigger-size, each painted area is cropped out of the original image for inference; mainly for performance and memory reasons on very large images. Default: 2042,2042
--crop-margin        Margin around the bounding box of the painted stroke when crop mode is triggered. Default: 256
--gui                Launch lama-cleaner as a desktop application.
--gui_size           Window size of the desktop application. Default: 1200 900
--input              Path of the image to load by default. Default: None
--port               Port of the Flask web server. Default: 8080
--debug              Enable debug mode of the Flask web server.
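
For example, the flags above can be combined like this (the image path is hypothetical):

```shell
# Use the LDM model on CPU, preload an image, and serve on port 3000
python3 main.py --model ldm --device cpu --ldm-steps 25 \
  --input ./photos/damaged.jpg --port 3000
```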

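The crop behaviour controlled by --crop-trigger-size and --crop-margin can be sketched roughly as follows. The function and variable names here are made up for illustration; the real logic lives in the lama_cleaner package and may differ in detail.

```python
def should_crop(img_w: int, img_h: int, trigger=(2042, 2042)) -> bool:
    """Crop mode is triggered when the image exceeds --crop-trigger-size."""
    return img_w > trigger[0] or img_h > trigger[1]

def crop_box(stroke_bbox, margin: int, img_w: int, img_h: int):
    """Expand the painted stroke's bounding box by --crop-margin pixels,
    clamped to the image bounds, so only that region is sent to the model."""
    left, top, right, bottom = stroke_bbox
    return (
        max(0, left - margin),
        max(0, top - margin),
        min(img_w, right + margin),
        min(img_h, bottom + margin),
    )
```

Running inference only on the expanded crop, then pasting the result back, keeps memory use bounded on very large images.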
Model Comparison

The diffusion model (ldm) is much slower than the GAN model (lama): a 1080x720 image takes about 8 seconds on a 3090. However, it can produce better results, as in the example below:

[Comparison images: original photo (photo-1583445095369-9c651e7e5d34), LaMa cleanup, LDM cleanup]

Blogs about diffusion models:

Development

Only needed if you plan to modify the frontend and recompile yourself.

Frontend

The frontend code is modified from cleanup.pictures. You can try their great online service here.

  • Install dependencies: cd lama_cleaner/app/ && yarn
  • Start development server: yarn dev
  • Build: yarn build

Docker

Run lama-cleaner inside a Docker container. Set CACHE_DIR to the path where models are stored. Optionally add -d to the docker run commands below to run the container as a daemon.

Build Docker image

docker build -f Dockerfile -t lamacleaner .

Run Docker (cpu)

docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080

Run Docker (gpu)

docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080

Then open http://localhost:8080

Like My Work?

Sanster