Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.

Lama-cleaner: Image inpainting tool powered by SOTA AI model

https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4

  • Support multiple model architectures
    1. LaMa
    2. LDM
  • High resolution support
  • Multi-stroke support. Press and hold the Cmd/Ctrl key to enable multi-stroke mode.
  • Zoom & Pan
  • Keep image EXIF data

Quick Start

Install requirements: pip3 install -r requirements.txt

Start server with LaMa model

python3 main.py --device=cuda --port=8080 --model=lama
  • --crop-trigger-size: If the image is larger than crop-trigger-size, crop the area around the painted stroke from the original image and run inference on that crop. This is mainly for performance and memory reasons on very large images. Default is 2042,2042.
  • --crop-margin: Margin around the bounding box of the painted stroke when crop mode is triggered. Default is 256.
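
For reference, here is the same LaMa command with the crop options set explicitly to their documented defaults. This is only an illustration of where the flags go; the exact argument syntax may differ slightly from your installed version.

python3 main.py --device=cuda --port=8080 --model=lama --crop-trigger-size=2042,2042 --crop-margin=256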

Start server with LDM model

python3 main.py --device=cuda --port=8080 --model=ldm --ldm-steps=50

--ldm-steps: The larger the value, the better the result, but inference will take longer.

The diffusion model is MUCH slower than GANs (a 1080x720 image takes about 8s on a 3090), but it can produce better results than LaMa.
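
If inference time matters more than quality, you can lower the step count. The value 25 below is only an arbitrary example to show the trade-off, not a recommended setting.

python3 main.py --device=cuda --port=8080 --model=ldm --ldm-steps=25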

GUI

You can run lama-cleaner as a desktop application using the following command line arguments.

--gui: Launch lama-cleaner as a desktop application

--gui_size: Set the window size for the application. Usage: --gui_size 1200 900
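
For example, launching the desktop application with the LaMa model might look like the command below. This is a sketch that assumes the GUI flags can be combined freely with the server options shown earlier.

python3 main.py --device=cuda --model=lama --gui --gui_size 1200 900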

Example results (left to right): original image, LaMa result, LDM result (image: photo-1583445095369-9c651e7e5d34).

Blogs about diffusion models:

Development

Only needed if you plan to modify the frontend and recompile yourself.

Frontend

The frontend code is modified from cleanup.pictures. You can try their great online service here.

  • Install dependencies: cd lama_cleaner/app/ && yarn
  • Start development server: yarn dev
  • Build: yarn build

Docker

Run within a Docker container. Set CACHE_DIR to the path where models are stored. Optionally add a -d option to the docker run commands below to run the container as a daemon.

Build Docker image

docker build -f Dockerfile -t lamacleaner .

Run Docker (cpu)

docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080

Run Docker (gpu)

docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
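
To run the container as a daemon, as mentioned above, add -d. The sketch below uses the GPU variant and drops --rm so the stopped container is not removed automatically; keep --rm if you prefer automatic cleanup.

docker run -d --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app lamacleaner python3 main.py --device=cuda --port=8080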

Then open http://localhost:8080

Like My Work?

Sanster