sd model no more need hf_access_token
parent 51e0be2c96
commit 2d793c5fb4

README.md
@@ -40,15 +40,16 @@ https://user-images.githubusercontent.com/3998421/196976498-ba1ad3ab-fa18-4c55-9
 5. [FcF](https://github.com/SHI-Labs/FcF-Inpainting)
 6. [SD1.5/SD2](https://github.com/runwayml/stable-diffusion)
 7. [Manga](https://github.com/msxie92/MangaInpainting)
-8. [Paint by Example](https://github.com/Fantasy-Studio/Paint-by-Example)
+8. [Paint by Example](https://github.com/Fantasy-Studio/Paint-by-Example) [YouTube Demo](https://www.youtube.com/watch?v=NSAN3TzfhaI&ab_channel=PanicByte)
 - Support CPU & GPU
 - Various inpainting [strategies](#inpainting-strategy)
 - Run as a desktop APP
-- [Interactive Segmentation](https://github.com/Sanster/lama-cleaner/releases/tag/0.28.0) on any object
+- [Interactive Segmentation](https://github.com/Sanster/lama-cleaner/releases/tag/0.28.0) on any object. [YouTube Demo](https://www.youtube.com/watch?v=xHdo8a4Mn2g&ab_channel=PanicByte)

 ## Usage

 A great introductory [YouTube video](https://www.youtube.com/watch?v=aYia7Jvbjno&ab_channel=Aitrepreneur) made by **Aitrepreneur**

 <details>
 <summary>1. Remove any unwanted things on the image</summary>
@@ -101,31 +102,27 @@ lama-cleaner --model=lama --device=cpu --port=8080
 # Lama Cleaner is now running at http://localhost:8080
 ```

-For stable-diffusion 1.5 model, you need
-to [accepting the terms to access](https://huggingface.co/runwayml/stable-diffusion-inpainting), and
-get an access token from here [huggingface access token](https://huggingface.co/docs/hub/security-tokens)
-
 If you prefer to use docker, you can check out [docker](#docker)

 If you have no idea what docker or pip is, please check the [One Click Installer](./scripts/README.md)

 Available command line arguments:

-| Name                 | Description                                                                                                   | Default  |
-| -------------------- | ------------------------------------------------------------------------------------------------------------- | -------- |
-| --model              | lama/ldm/zits/mat/fcf/sd1.5/manga/sd2/paint_by_example See details in [Inpaint Model](#inpainting-model)      | lama     |
-| --hf_access_token    | stable-diffusion need [huggingface access token](https://huggingface.co/docs/hub/security-tokens) to download model | |
-| --sd-run-local       | Once the model as downloaded, you can pass this arg and remove `--hf_access_token`                            |          |
-| --sd-disable-nsfw    | Disable stable-diffusion NSFW checker.                                                                        |          |
-| --sd-cpu-textencoder | Always run stable-diffusion TextEncoder model on CPU.                                                         |          |
-| --sd-enable-xformers | Enable xFormers optimizations. See: [facebookresearch/xformers](https://github.com/facebookresearch/xformers) |          |
-| --no-half            | Using full precision for sd/paint_by_exmaple model                                                            |          |
-| --device             | cuda / cpu / mps                                                                                              | cuda     |
-| --port               | Port for backend flask web server                                                                             | 8080     |
-| --gui                | Launch lama-cleaner as a desktop application                                                                  |          |
-| --gui_size           | Set the window size for the application                                                                       | 1200 900 |
-| --input              | Path to image you want to load by default                                                                     | None     |
-| --debug              | Enable debug mode for flask web server                                                                        |          |
+| Name                 | Description                                                                                                   | Default  |
+| -------------------- | ------------------------------------------------------------------------------------------------------------- | -------- |
+| --model              | lama/ldm/zits/mat/fcf/sd1.5/manga/sd2/paint_by_example See details in [Inpaint Model](#inpainting-model)      | lama     |
+| --sd-disable-nsfw    | Disable the stable-diffusion NSFW checker                                                                     |          |
+| --sd-cpu-textencoder | Always run the stable-diffusion TextEncoder model on CPU                                                      |          |
+| --sd-enable-xformers | Enable xFormers optimizations. See: [facebookresearch/xformers](https://github.com/facebookresearch/xformers) |          |
+| --local-files-only   | Once the model is downloaded, pass this arg so diffusers does not connect to the huggingface server           |          |
+| --cpu-offload        | For the sd/paint_by_example models, offload all models to CPU, sacrificing speed to reduce vRAM usage         |          |
+| --no-half            | Use full precision for the sd/paint_by_example model                                                          |          |
+| --device             | cuda / cpu / mps                                                                                              | cuda     |
+| --port               | Port for the backend flask web server                                                                         | 8080     |
+| --gui                | Launch lama-cleaner as a desktop application                                                                  |          |
+| --gui_size           | Set the window size for the application                                                                       | 1200 900 |
+| --input              | Path to the image to load by default                                                                          | None     |
+| --debug              | Enable debug mode for the flask web server                                                                    |          |

 ## Inpainting Model
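The arguments in the table above combine into a single launch command. As a minimal sketch of how they fit together (`build_command` is a hypothetical helper written for illustration, not part of lama-cleaner):

```python
# Hypothetical helper: compose a lama-cleaner launch command from a config
# dict, using the flags documented in the table above. Not part of the
# lama-cleaner codebase; shown only to illustrate how the flags combine.
def build_command(config: dict) -> str:
    parts = [
        "lama-cleaner",
        f"--model {config.get('model', 'lama')}",    # default model: lama
        f"--device {config.get('device', 'cuda')}",  # default device: cuda
        f"--port {config.get('port', 8080)}",        # default port: 8080
    ]
    # Boolean flags take no value; append them only when enabled.
    if config.get("local_files_only"):
        parts.append("--local-files-only")
    if config.get("no_half"):
        parts.append("--no-half")
    return " ".join(parts)

print(build_command({"model": "sd1.5", "device": "cpu", "port": 8080}))
# lama-cleaner --model sd1.5 --device cpu --port 8080
```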
scripts/README.md

@@ -1,11 +1,9 @@
 # Lama Cleaner One Click Installer

 ## Model Description

 - **lama**: State of the art image inpainting AI model, useful to remove any unwanted object, defect, or person from your pictures.
-- **sd1.5**: Stable Diffusion model, text-driven image editing. To use this model you need to [accept the terms to access](https://huggingface.co/runwayml/stable-diffusion-inpainting), and get an access token from [huggingface access token](https://huggingface.co/docs/hub/security-tokens).
+- **sd1.5**: Stable Diffusion model, text-driven image editing.

 ## Windows

@@ -14,7 +12,6 @@
 1. Double click `win_config.bat`, follow the guide in the terminal to choose a [model](#model-description) and set other configs.
 1. Double click `win_start.bat` to start the server.

 ## Q&A

 **How to update the version?**

@@ -24,6 +21,7 @@ Rerun `win_config.bat` will install the newest version of lama-cleaner
 **Where is the model downloaded?**

 By default, models are downloaded to the user folder:

 - stable diffusion model: `C:\Users\your_name\.cache\huggingface`
 - lama model: `C:\Users\your_name\.cache\torch`

@@ -36,4 +34,3 @@ set TORCH_HOME=your_directory
 set HF_HOME=your_directory
 @call invoke start
 ```
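The default locations above follow the standard torch and huggingface cache layout, which the `TORCH_HOME` / `HF_HOME` environment variables shown in this file override. A small sketch of how the effective directories resolve (the `cache_dirs` helper is illustrative, not project code):

```python
# Illustrative helper (not project code): resolve the model cache
# directories described above. The TORCH_HOME / HF_HOME environment
# variables override the per-user defaults (~/.cache/torch and
# ~/.cache/huggingface).
import os

def cache_dirs(env: dict) -> dict:
    home = os.path.expanduser("~")
    return {
        "torch": env.get("TORCH_HOME", os.path.join(home, ".cache", "torch")),
        "huggingface": env.get("HF_HOME", os.path.join(home, ".cache", "huggingface")),
    }

# Resolve against the current process environment.
print(cache_dirs(dict(os.environ)))
```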
@@ -31,6 +31,7 @@ CONFIG_PATH = "config.json"
 class MODEL(str, Enum):
     SD15 = "sd1.5"
     LAMA = "lama"
+    PAINT_BY_EXAMPLE = "paint_by_example"


 class DEVICE(str, Enum):

@@ -48,7 +49,7 @@ def info(c):
         c.run("python --version")
         c.run("which pip")
         c.run("pip --version")
-        c.run('pip list | grep "torch\|lama\|diffusers\|opencv\|cuda"')
+        c.run('pip list | grep "torch\|lama\|diffusers\|opencv\|cuda\|xformers\|accelerate"')
     except:
         pass
     print("-" * 60)
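The `pip list | grep ...` call in the hunk above assumes a Unix shell is available. A portable sketch of the same dependency check using only the Python standard library (`matching_packages` is a hypothetical name, not a function from the project):

```python
# Portable alternative to `pip list | grep ...`: list installed
# distributions whose names contain any of the keywords of interest.
# `matching_packages` is a hypothetical helper for illustration.
from importlib import metadata

KEYWORDS = ("torch", "lama", "diffusers", "opencv", "cuda", "xformers", "accelerate")

def matching_packages() -> dict:
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if any(keyword in name for keyword in KEYWORDS):
            found[name] = dist.version
    return found

for name, version in sorted(matching_packages().items()):
    print(name, version)
```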
@@ -56,23 +57,10 @@ def info(c):

 @task(pre=[info])
 def config(c, disable_device_choice=False):
-    # TODO: prompt the user to choose a model, a device, a port, and a host
-    # For sd models, prompt to accept the terms and enter a huggingface token
     model = Prompt.ask(
-        "Choice model", choices=[MODEL.SD15, MODEL.LAMA], default=MODEL.SD15
+        "Choice model", choices=[MODEL.SD15, MODEL.LAMA, MODEL.PAINT_BY_EXAMPLE], default=MODEL.SD15
     )

-    hf_access_token = ""
-    if model == MODEL.SD15:
-        while True:
-            hf_access_token = Prompt.ask(
-                "Huggingface access token (https://huggingface.co/docs/hub/security-tokens)"
-            )
-            if hf_access_token == "":
-                log.warning("Access token is required to download model")
-            else:
-                break
-
     if disable_device_choice:
         device = DEVICE.CPU
     else:
@@ -93,7 +81,6 @@ def config(c, disable_device_choice=False):
     configs = {
         "model": model,
         "device": device,
-        "hf_access_token": hf_access_token,
         "desktop": desktop,
     }
     log.info(f"Save config to {CONFIG_PATH}")
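The `configs` dict assembled above is persisted to `CONFIG_PATH` as JSON. A minimal, self-contained sketch of that save/load roundtrip (the helper names are illustrative, not the project's):

```python
# Illustrative save/load roundtrip for a config dict like the one above.
# Helper names are hypothetical; only the JSON-on-disk shape is implied
# by the diff (CONFIG_PATH = "config.json").
import json
import os
import tempfile

def save_config(path: str, configs: dict) -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(configs, f, indent=2)

def load_config(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    save_config(path, {"model": "sd1.5", "device": "cpu", "desktop": False})
    print(load_config(path))  # {'model': 'sd1.5', 'device': 'cpu', 'desktop': False}
```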
@@ -114,17 +101,16 @@ def start(c):

     model = configs["model"]
     device = configs["device"]
-    hf_access_token = configs["hf_access_token"]
     desktop = configs["desktop"]
     port = find_free_port()
     log.info(f"Using random port: {port}")

     if desktop:
         c.run(
-            f"lama-cleaner --model {model} --device {device} --hf_access_token={hf_access_token} --port {port} --gui --gui-size 1400 900"
+            f"lama-cleaner --model {model} --device {device} --port {port} --gui --gui-size 1400 900"
         )
     else:
         c.run(
-            f"lama-cleaner --model {model} --device {device} --hf_access_token={hf_access_token} --port {port}"
+            f"lama-cleaner --model {model} --device {device} --port {port}"
         )
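`start()` above calls `find_free_port()`, whose body is not shown in this diff. One common way such a helper is implemented (an assumption for illustration, not the project's actual code) is to bind to port 0 and let the OS pick a free port:

```python
# Sketch of a find_free_port() helper like the one used in start() above.
# This is an assumed implementation, not lama-cleaner's actual code:
# binding to port 0 asks the OS to assign any currently free port.
import socket

def find_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))       # port 0: let the OS choose
        return s.getsockname()[1]      # the port the OS assigned

print(find_free_port())
```

Note the small race inherent in this approach: the port is released before the server rebinds it, so another process could grab it in between; for a local one-click launcher that risk is usually acceptable.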