sd model no longer needs hf_access_token

This commit is contained in:
Qing 2023-01-08 21:13:36 +08:00
parent 51e0be2c96
commit 2d793c5fb4
3 changed files with 26 additions and 46 deletions


@@ -40,15 +40,16 @@ https://user-images.githubusercontent.com/3998421/196976498-ba1ad3ab-fa18-4c55-9
5. [FcF](https://github.com/SHI-Labs/FcF-Inpainting)
6. [SD1.5/SD2](https://github.com/runwayml/stable-diffusion)
7. [Manga](https://github.com/msxie92/MangaInpainting)
-8. [Paint by Example](https://github.com/Fantasy-Studio/Paint-by-Example)
+8. [Paint by Example](https://github.com/Fantasy-Studio/Paint-by-Example) [YouTube Demo](https://www.youtube.com/watch?v=NSAN3TzfhaI&ab_channel=PanicByte)

- Support CPU & GPU
- Various inpainting [strategies](#inpainting-strategy)
- Run as a desktop app
-- [Interactive Segmentation](https://github.com/Sanster/lama-cleaner/releases/tag/0.28.0) on any object
+- [Interactive Segmentation](https://github.com/Sanster/lama-cleaner/releases/tag/0.28.0) on any object. [YouTube Demo](https://www.youtube.com/watch?v=xHdo8a4Mn2g&ab_channel=PanicByte)

## Usage

A great introductory [youtube video](https://www.youtube.com/watch?v=aYia7Jvbjno&ab_channel=Aitrepreneur) made by **Aitrepreneur**

<details>
<summary>1. Remove any unwanted things on the image</summary>
@@ -101,31 +102,27 @@ lama-cleaner --model=lama --device=cpu --port=8080
# Lama Cleaner is now running at http://localhost:8080
```

-For the stable-diffusion 1.5 model, you need to [accept the terms to access](https://huggingface.co/runwayml/stable-diffusion-inpainting) and get an access token from [huggingface access token](https://huggingface.co/docs/hub/security-tokens)

If you prefer to use docker, you can check out [docker](#docker)

If you have no idea what docker or pip is, please check the [One Click Installer](./scripts/README.md)

Available command line arguments:

| Name | Description | Default |
| -------------------- | --------------------------------------------------------------------------------------------------------------- | -------- |
| --model | lama/ldm/zits/mat/fcf/sd1.5/manga/sd2/paint_by_example See details in [Inpaint Model](#inpainting-model) | lama |
-| --hf_access_token | stable-diffusion needs a [huggingface access token](https://huggingface.co/docs/hub/security-tokens) to download the model | |
-| --sd-run-local | Once the model has been downloaded, you can pass this arg and remove `--hf_access_token` | |
| --sd-disable-nsfw | Disable the stable-diffusion NSFW checker | |
| --sd-cpu-textencoder | Always run the stable-diffusion TextEncoder model on CPU | |
| --sd-enable-xformers | Enable xFormers optimizations. See: [facebookresearch/xformers](https://github.com/facebookresearch/xformers) | |
+| --local-files-only | Once the model has been downloaded, pass this arg to keep diffusers from connecting to the huggingface server | |
+| --cpu-offload | For sd/paint_by_example models, offload all models to CPU, sacrificing speed to reduce VRAM usage | |
| --no-half | Use full precision for the sd/paint_by_example model | |
| --device | cuda / cpu / mps | cuda |
| --port | Port for the backend flask web server | 8080 |
| --gui | Launch lama-cleaner as a desktop application | |
| --gui_size | Set the window size for the application | 1200 900 |
| --input | Path to the image to load by default | None |
| --debug | Enable debug mode for the flask web server | |
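The new `--local-files-only` and `--cpu-offload` options are plain on/off switches. A minimal sketch of how flags like these could be parsed (flag names are taken from the table above; this is an illustrative parser, not lama-cleaner's actual argument handling):

```python
import argparse

# Hypothetical parser mirroring a few flags from the table above.
parser = argparse.ArgumentParser(prog="lama-cleaner")
parser.add_argument("--model", default="lama")
parser.add_argument("--device", default="cuda", choices=["cuda", "cpu", "mps"])
parser.add_argument("--port", type=int, default=8080)
parser.add_argument("--local-files-only", action="store_true",
                    help="do not contact the huggingface server once models are cached")
parser.add_argument("--cpu-offload", action="store_true")
parser.add_argument("--no-half", action="store_true")

args = parser.parse_args(["--model", "sd1.5", "--local-files-only"])
print(args.model, args.local_files_only)  # → sd1.5 True
```

Because the token is gone, offline use is now a single boolean rather than a credential to manage.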
## Inpainting Model


@@ -1,11 +1,9 @@
# Lama Cleaner One Click Installer

## Model Description

- **lama**: State-of-the-art image inpainting AI model, useful for removing any unwanted object, defect, or person from your pictures.
-- **sd1.5**: Stable Diffusion model, text-driven image editing. To use this model you need to [accept the terms to access](https://huggingface.co/runwayml/stable-diffusion-inpainting) and get an access token from [huggingface access token](https://huggingface.co/docs/hub/security-tokens).
+- **sd1.5**: Stable Diffusion model, text-driven image editing.

## Windows
@@ -14,7 +12,6 @@
1. Double click `win_config.bat` and follow the guide in the terminal to choose a [model](#model-description) and set other configs.
1. Double click `win_start.bat` to start the server.
## Q&A

**How to update the version?**

@@ -24,6 +21,7 @@ Rerunning `win_config.bat` will install the newest version of lama-cleaner
**Where is the model downloaded?**

By default, models are downloaded to the user folder:

- stable diffusion model: `C:\Users\your_name\.cache\huggingface`
- lama model: `C:\Users\your_name\.cache\torch`

@@ -36,4 +34,3 @@ set TORCH_HOME=your_directory
set HF_HOME=your_directory

@call invoke start
```


@@ -31,6 +31,7 @@ CONFIG_PATH = "config.json"
class MODEL(str, Enum):
    SD15 = "sd1.5"
    LAMA = "lama"
+    PAINT_BY_EXAMPLE = 'paint_by_example'

class DEVICE(str, Enum):
@@ -48,7 +49,7 @@ def info(c):
        c.run("python --version")
        c.run("which pip")
        c.run("pip --version")
-        c.run('pip list | grep "torch\|lama\|diffusers\|opencv\|cuda"')
+        c.run('pip list | grep "torch\|lama\|diffusers\|opencv\|cuda\|xformers\|accelerate"')
    except:
        pass
    print("-" * 60)
@@ -56,23 +57,10 @@ def info(c):

@task(pre=[info])
def config(c, disable_device_choice=False):
-    # TODO: prompt the user to choose a model, device, port, and host
-    # For sd models, prompt to accept the terms and enter a huggingface token
    model = Prompt.ask(
-        "Choice model", choices=[MODEL.SD15, MODEL.LAMA], default=MODEL.SD15
+        "Choice model", choices=[MODEL.SD15, MODEL.LAMA, MODEL.PAINT_BY_EXAMPLE], default=MODEL.SD15
    )
-    hf_access_token = ""
-    if model == MODEL.SD15:
-        while True:
-            hf_access_token = Prompt.ask(
-                "Huggingface access token (https://huggingface.co/docs/hub/security-tokens)"
-            )
-            if hf_access_token == "":
-                log.warning("Access token is required to download model")
-            else:
-                break
    if disable_device_choice:
        device = DEVICE.CPU
    else:
@@ -93,7 +81,6 @@ def config(c, disable_device_choice=False):
    configs = {
        "model": model,
        "device": device,
-        "hf_access_token": hf_access_token,
        "desktop": desktop,
    }
    log.info(f"Save config to {CONFIG_PATH}")
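After this change the saved config carries only three keys. A small sketch of the write/read round trip, assuming the same `config.json` layout (`save_config`/`load_config` are illustrative names, not functions from this repo):

```python
import json
import os
import tempfile

def save_config(path: str, model: str, device: str, desktop: bool) -> None:
    # hf_access_token is intentionally absent: the sd model no longer needs it.
    with open(path, "w") as f:
        json.dump({"model": model, "device": device, "desktop": desktop}, f, indent=2)

def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "config.json")
save_config(path, model="sd1.5", device="cpu", desktop=False)
configs = load_config(path)
print(sorted(configs))  # → ['desktop', 'device', 'model']
```

Any old config containing `hf_access_token` simply has that key ignored once the code stops reading it.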
@@ -114,17 +101,16 @@ def start(c):
    model = configs["model"]
    device = configs["device"]
-    hf_access_token = configs["hf_access_token"]
    desktop = configs["desktop"]

    port = find_free_port()
    log.info(f"Using random port: {port}")

    if desktop:
        c.run(
-            f"lama-cleaner --model {model} --device {device} --hf_access_token={hf_access_token} --port {port} --gui --gui-size 1400 900"
+            f"lama-cleaner --model {model} --device {device} --port {port} --gui --gui-size 1400 900"
        )
    else:
        c.run(
-            f"lama-cleaner --model {model} --device {device} --hf_access_token={hf_access_token} --port {port}"
+            f"lama-cleaner --model {model} --device {device} --port {port}"
        )
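`find_free_port()` is called above but not shown in this diff. A common implementation, offered here only as an assumption about what it likely does, binds to port 0 so the OS picks an unused port:

```python
import socket

def find_free_port() -> int:
    # Binding to port 0 asks the OS for any currently unused port;
    # illustrative sketch, not necessarily the implementation in this repo.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = find_free_port()
print(port)
```

Note the small race inherent to this pattern: the port is released before lama-cleaner rebinds it, so another process could grab it in between; for a local one-click launcher this is usually acceptable.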