# Lama-cleaner: Image inpainting tool powered by SOTA AI model
![downloads](https://img.shields.io/pypi/dm/lama-cleaner)
![version](https://img.shields.io/pypi/v/lama-cleaner)

https://user-images.githubusercontent.com/3998421/153323093-b664bb68-2928-480b-b59b-7c1ee24a4507.mp4

- [x] Support multiple model architectures
  1. [LaMa](https://github.com/saic-mdal/lama)
  2. [LDM](https://github.com/CompVis/latent-diffusion)
- [x] Support CPU & GPU
- [x] High-resolution support
- [x] Run as a desktop app
- [x] Multi-stroke support: press and hold the `cmd/ctrl` key to enable multi-stroke mode
- [x] Zoom & pan

## Install

```bash
pip install lama-cleaner

lama-cleaner --device=cpu --port=8080
```
Available commands:
| Name       | Description                                      | Default  |
| ---------- | ------------------------------------------------ | -------- |
| --model    | lama or ldm. See details in **Model Comparison** | lama     |
| --device   | cuda or cpu                                      | cuda     |
| --gui      | Launch lama-cleaner as a desktop application     |          |
| --gui_size | Set the window size of the application           | 1200 900 |
| --input    | Path to the image loaded by default              | None     |
| --port     | Port for the Flask web server                    | 8080     |
| --debug    | Enable debug mode for the Flask web server       |          |
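
The flags above can be combined. As a sketch (the image path below is a placeholder, not part of the project):

```bash
# Launch with the LDM model on CPU, preload an image, and serve on port 3000.
# Replace ./photo.jpg with the image you actually want to edit.
lama-cleaner --model=ldm --device=cpu --input=./photo.jpg --port=3000
```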

## Model Comparison

The diffusion model (LDM) is **much** slower than the GAN-based model (LaMa): inpainting a 1080x720 image takes about 8 s on an RTX 3090. However, it can produce better results, as in the example below:
| Original Image | LaMa | LDM |
| ----------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| ![photo-1583445095369-9c651e7e5d34](https://user-images.githubusercontent.com/3998421/156923525-d6afdec3-7b98-403f-ad20-88ebc6eb8d6d.jpg) | ![photo-1583445095369-9c651e7e5d34_cleanup_lama](https://user-images.githubusercontent.com/3998421/156923620-a40cc066-fd4a-4d85-a29f-6458711d1247.png) | ![photo-1583445095369-9c651e7e5d34_cleanup_ldm](https://user-images.githubusercontent.com/3998421/156923652-0d06c8c8-33ad-4a42-a717-9c99f3268933.png) |
Blogs about diffusion models:
- https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
- https://yang-song.github.io/blog/2021/score/
## Development

Only needed if you plan to modify the frontend and rebuild it yourself.

### Frontend

The frontend code is modified from [cleanup.pictures](https://github.com/initml/cleanup.pictures). You can try their great online service [here](https://cleanup.pictures/).

- Install dependencies: `cd lama_cleaner/app/ && yarn`
- Start the development server: `yarn start`
- Build: `yarn build`

## Docker
Run within a Docker container. Set `CACHE_DIR` to the path where models are stored. Optionally add the `-d` flag to the `docker run` commands below to run as a daemon.

### Build Docker image
```bash
docker build -f Dockerfile -t lamacleaner .
```
### Run Docker (CPU)

```bash
docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
```
2021-11-15 23:51:27 +01:00
### Run Docker (GPU)

```bash
docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
```
Then open [http://localhost:8080](http://localhost:8080)
## Like My Work?
<a href="https://www.buymeacoffee.com/Sanster">
<img height="50em" src="https://cdn.buymeacoffee.com/buttons/v2/default-blue.png" alt="Sanster" />
</a>