# Lama-cleaner: Image inpainting tool powered by [LaMa](https://github.com/saic-mdal/lama)
This project is mainly intended for self-hosting the LaMa model; some interaction improvements may be added later.
## Quick Start
- Install requirements: `pip3 install -r requirements.txt`
- Start server: `python3 main.py --device=cuda --port=8080`
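Once the server is running it serves the web UI on the chosen port. Programmatic access depends on the frontend's API, which is not documented here, so the route and field names below are hypothetical — check the frontend code for the actual endpoint. A minimal client sketch using only the Python standard library:

```python
import io
import uuid
import urllib.request

# Hypothetical route -- verify against the actual frontend/server code.
SERVER = "http://localhost:8080/inpaint"

def build_multipart(files):
    """Build a multipart/form-data body from {field: (filename, bytes)}.

    Returns (body_bytes, content_type_header).
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for field, (filename, data) in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write((
            f'Content-Disposition: form-data; name="{field}"; '
            f'filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n"
            "\r\n"
        ).encode())
        buf.write(data)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

def inpaint(image_bytes, mask_bytes):
    """POST an image and its mask; returns the response body (image bytes)."""
    body, content_type = build_multipart({
        "image": ("image.png", image_bytes),
        "mask": ("mask.png", mask_bytes),
    })
    req = urllib.request.Request(SERVER, data=body, method="POST")
    req.add_header("Content-Type", content_type)
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```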
## Development
### Frontend
The frontend code is adapted from [cleanup.pictures](https://github.com/initml/cleanup.pictures).
You can try their great online service [here](https://cleanup.pictures/).
- Install dependencies: `cd lama_cleaner/app/ && yarn`
- Start development server: `yarn dev`
- Build: `yarn build`
## Docker
Run the server inside a Docker container. Set `cache_dir` to the directory where models should be stored.
Optionally add the `-d` flag to the `docker run` commands below to run the container as a daemon.
### Build Docker image
```
docker build -f Dockerfile -t lamacleaner .
```
### Run Docker (CPU)
```
docker run -p 8080:8080 -e cache_dir=/app/models -v models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080
```
### Run Docker (GPU)
```
docker run --gpus all -p 8080:8080 -e cache_dir=/app/models -v models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080
```
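To run either command as a daemon instead, add `-d` and a `--name` so the container can be managed afterwards. A sketch based on the CPU command above:

```shell
# Start detached; --name lets us reference the container later.
docker run -d --name lamacleaner -p 8080:8080 -e cache_dir=/app/models \
  -v models:/app/models -v $(pwd):/app lamacleaner \
  python3 main.py --device=cpu --port=8080

docker logs -f lamacleaner   # follow the server output
docker stop lamacleaner      # stop the daemon when done
```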
Then open [http://localhost:8080](http://localhost:8080)