stable-diffusion-webui: commit 3324f31e
Authored Aug 22, 2022 by AUTOMATIC
Parent: 71cf18b0

Commit message: first

Showing 3 changed files with 462 additions and 2 deletions:

- README.md (+58, -2)
- screenshot.png (+0, -0)
- webui.py (+404, -0)

README.md
# Stable Diffusion web UI

A browser interface based on the Gradio library for Stable Diffusion.
The original script with the Gradio UI was written by a kind anonymous user. This is a modification.

## Stable Diffusion

This script assumes that you already have the main Stable Diffusion stuff installed, assumed to be in the directory `/sd`.

If you don't have it installed, follow this guide:

- https://rentry.org/kretard

This repository's `webui.py` is a replacement for `kdiff.py` from the guide.

In particular, the following files must exist (a quick check sketch follows this list):

- `/sd/configs/stable-diffusion/v1-inference.yaml`
- `/sd/models/ldm/stable-diffusion-v1/model.ckpt`
- `/sd/ldm/util.py`
- `/sd/k_diffusion/__init__.py`
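If you want to verify that layout before launching, a minimal sketch along these lines works; it is not part of `webui.py`, and the `/sd` base path is just this guide's assumption:

```python
import os

# Paths assumed by this guide; adjust if your Stable Diffusion checkout lives elsewhere.
required = [
    "/sd/configs/stable-diffusion/v1-inference.yaml",
    "/sd/models/ldm/stable-diffusion-v1/model.ckpt",
    "/sd/ldm/util.py",
    "/sd/k_diffusion/__init__.py",
]

missing = [path for path in required if not os.path.exists(path)]
if missing:
    print("Missing files:", *missing, sep="\n  ")
else:
    print("All required Stable Diffusion files found.")
```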
## GFPGAN

If you want to use GFPGAN to improve generated faces, you need to install it separately.

Follow the instructions from https://github.com/TencentARC/GFPGAN, but when cloning it, do so into the Stable Diffusion main directory, `/sd`. After that, download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it into the `/sd/GFPGAN/experiments/pretrained_models` directory. If you run into trouble with GFPGAN support, follow the instructions from the GFPGAN repository until the `inference_gfpgan.py` script works.

The following files must exist:

- `/sd/GFPGAN/inference_gfpgan.py`
- `/sd/GFPGAN/experiments/pretrained_models/GFPGANv1.3.pth`

If the GFPGAN directory does not exist, you will not get the option to use GFPGAN in the UI. If it does exist, you will either be able to use it, or there will be a message in the console with an error related to GFPGAN.
## Web UI

Run the script as:

`python webui.py`

When running the script, you must be in the main Stable Diffusion directory, `/sd`. If you cloned this repository into a subdirectory of `/sd`, say, the `stable-diffusion-webui` directory, run it as:

`python stable-diffusion-webui/webui.py`

When launching, you may get a very long warning message related to some weights not being used. You may freely ignore it.

After a while, you will get a message like this:

```
Running on local URL: http://127.0.0.1:7860/
```

Open the URL in a browser, and you are good to go.
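The script also accepts a few optional command-line flags, defined in the `argparse` section of `webui.py` below: `--outdir` to change where results are written, `--skip_grid` and `--skip_save` to skip saving the grid or the individual samples, `--n_rows` for the grid layout, `--config` and `--ckpt` to point at a different config or checkpoint, `--precision` (`full` or `autocast`), and `--gfpgan-dir` for the GFPGAN location. For example, a hypothetical invocation writing results to a custom directory without a grid:

`python stable-diffusion-webui/webui.py --outdir /sd/outputs --skip_grid`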
screenshot.png (new file, 865 KB)
webui.py (new file, +404 lines):
import PIL
import argparse, os, sys, glob
import torch
import torch.nn as nn
import numpy as np
import gradio as gr
from omegaconf import OmegaConf
from PIL import Image
from itertools import islice
from einops import rearrange, repeat
from torchvision.utils import make_grid
from torch import autocast
from contextlib import contextmanager, nullcontext
import mimetypes
import random

import k_diffusion as K
from ldm.util import instantiate_from_config
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
# this is a fix for Windows users. Without it, javascript files will be served with text/html content-type and the browser will not show any UI
mimetypes.init()
mimetypes.add_type('application/javascript', '.js')
# some of those options should not be changed at all because they would break the model, so I removed them from options.
opt_C = 4
opt_f = 8
parser = argparse.ArgumentParser()
parser.add_argument("--outdir", type=str, nargs="?", help="dir to write results to", default=None)
parser.add_argument("--skip_grid", action='store_true', help="do not save a grid, only individual samples. Helpful when evaluating lots of samples",)
parser.add_argument("--skip_save", action='store_true', help="do not save individual samples. For speed measurements.",)
parser.add_argument("--n_rows", type=int, default=0, help="rows in the grid (default: n_samples)",)
parser.add_argument("--config", type=str, default="configs/stable-diffusion/v1-inference.yaml", help="path to config which constructs model",)
parser.add_argument("--ckpt", type=str, default="models/ldm/stable-diffusion-v1/model.ckpt", help="path to checkpoint of model",)
parser.add_argument("--precision", type=str, help="evaluate at this precision", choices=["full", "autocast"], default="autocast")
parser.add_argument("--gfpgan-dir", type=str, help="GFPGAN directory", default='./GFPGAN')
opt = parser.parse_args()

GFPGAN_dir = opt.gfpgan_dir
def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
def load_model_from_config(config, ckpt, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    model = instantiate_from_config(config.model)
    m, u = model.load_state_dict(sd, strict=False)
    if len(m) > 0 and verbose:
        print("missing keys:")
        print(m)
    if len(u) > 0 and verbose:
        print("unexpected keys:")
        print(u)

    model.cuda()
    model.eval()
    return model
def load_img_pil(img_pil):
    image = img_pil.convert("RGB")
    w, h = image.size
    print(f"loaded input image of size ({w}, {h})")
    w, h = map(lambda x: x - x % 64, (w, h))  # resize to integer multiple of 64
    image = image.resize((w, h), resample=PIL.Image.LANCZOS)
    print(f"cropped image to size ({w}, {h})")
    image = np.array(image).astype(np.float32) / 255.0
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return 2.*image - 1.
def load_img(path):
    return load_img_pil(Image.open(path))
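# CFGDenoiser applies classifier-free guidance around a k-diffusion denoiser:
# the unconditional and conditional batches are run together and combined as
# uncond + (cond - uncond) * cond_scale.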
class CFGDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cond_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond + (cond - uncond) * cond_scale
def load_GFPGAN():
    model_name = 'GFPGANv1.3'
    model_path = os.path.join(GFPGAN_dir, 'experiments/pretrained_models', model_name + '.pth')
    if not os.path.isfile(model_path):
        raise Exception("GFPGAN model not found at path " + model_path)

    sys.path.append(os.path.abspath(GFPGAN_dir))
    from gfpgan import GFPGANer

    return GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
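# GFPGAN is optional: it is only loaded if the --gfpgan-dir directory exists, and a
# failure to load it is reported to stderr without stopping the web UI.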
GFPGAN = None
if os.path.exists(GFPGAN_dir):
    try:
        GFPGAN = load_GFPGAN()
        print("Loaded GFPGAN")
    except Exception:
        import traceback

        print("Error loading GFPGAN:", file=sys.stderr)
        print(traceback.format_exc(), file=sys.stderr)
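# Load the Stable Diffusion config and checkpoint once at startup; the model is kept
# in half precision and moved to the GPU when one is available.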
config = OmegaConf.load("configs/stable-diffusion/v1-inference.yaml")
model = load_model_from_config(config, "models/ldm/stable-diffusion-v1/model.ckpt")

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = model.half().to(device)
def image_grid(imgs, rows):
    cols = len(imgs) // rows

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))

    return grid
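# dream() is the txt2img entry point called from the Gradio interface. It picks the
# requested sampler (PLMS, DDIM, or k-diffusion LMS), generates n_iter batches of
# n_samples images at the requested resolution, optionally restores faces with GFPGAN,
# saves individual samples and a grid, and returns the images, the seed, and a
# copy-pasteable parameters string.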
def dream(prompt: str, ddim_steps: int, sampler_name: str, fixed_code: bool, use_GFPGAN: bool, ddim_eta: float, n_iter: int, n_samples: int, cfg_scale: float, seed: int, height: int, width: int):
    torch.cuda.empty_cache()

    outpath = opt.outdir or "outputs/txt2img-samples"

    if seed == -1:
        seed = random.randrange(4294967294)
    seed = int(seed)

    is_PLMS = sampler_name == 'PLMS'
    is_DDIM = sampler_name == 'DDIM'
    is_Kdif = sampler_name == 'k-diffusion'

    sampler = None
    if is_PLMS:
        sampler = PLMSSampler(model)
    elif is_DDIM:
        sampler = DDIMSampler(model)
    elif is_Kdif:
        pass
    else:
        raise Exception("Unknown sampler: " + sampler_name)

    model_wrap = K.external.CompVisDenoiser(model)

    os.makedirs(outpath, exist_ok=True)

    batch_size = n_samples
    n_rows = opt.n_rows if opt.n_rows > 0 else batch_size

    assert prompt is not None
    data = [batch_size * [prompt]]

    sample_path = os.path.join(outpath, "samples")
    os.makedirs(sample_path, exist_ok=True)
    base_count = len(os.listdir(sample_path))
    grid_count = len(os.listdir(outpath)) - 1

    start_code = None
    if fixed_code:
        start_code = torch.randn([n_samples, opt_C, height // opt_f, width // opt_f], device=device)

    precision_scope = autocast if opt.precision == "autocast" else nullcontext
    output_images = []
    with torch.no_grad(), precision_scope("cuda"), model.ema_scope():
        all_samples = []
        for n in range(n_iter):
            for batch_index, prompts in enumerate(data):
                uc = None
                if cfg_scale != 1.0:
                    uc = model.get_learned_conditioning(batch_size * [""])
                if isinstance(prompts, tuple):
                    prompts = list(prompts)
                c = model.get_learned_conditioning(prompts)
                shape = [opt_C, height // opt_f, width // opt_f]

                current_seed = seed + n * len(data) + batch_index
                torch.manual_seed(current_seed)

                if is_Kdif:
                    sigmas = model_wrap.get_sigmas(ddim_steps)
                    x = torch.randn([n_samples, *shape], device=device) * sigmas[0]  # for GPU draw
                    model_wrap_cfg = CFGDenoiser(model_wrap)
                    samples_ddim = K.sampling.sample_lms(model_wrap_cfg, x, sigmas, extra_args={'cond': c, 'uncond': uc, 'cond_scale': cfg_scale}, disable=False)

                elif sampler is not None:
                    samples_ddim, _ = sampler.sample(S=ddim_steps, conditioning=c, batch_size=n_samples, shape=shape, verbose=False, unconditional_guidance_scale=cfg_scale, unconditional_conditioning=uc, eta=ddim_eta, x_T=start_code)

                x_samples_ddim = model.decode_first_stage(samples_ddim)
                x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)

                if not opt.skip_save or not opt.skip_grid:
                    for x_sample in x_samples_ddim:
                        x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                        x_sample = x_sample.astype(np.uint8)

                        if use_GFPGAN and GFPGAN is not None:
                            cropped_faces, restored_faces, restored_img = GFPGAN.enhance(x_sample, has_aligned=False, only_center_face=False, paste_back=True)
                            x_sample = restored_img

                        image = Image.fromarray(x_sample)
                        image.save(os.path.join(sample_path, f"{base_count:05}-{current_seed}_{prompt.replace(' ', '_')[:128]}.png"))

                        output_images.append(image)
                        base_count += 1

                if not opt.skip_grid:
                    all_samples.append(x_sample)

        if not opt.skip_grid:
            # additionally, save as grid
            grid = image_grid(output_images, rows=n_rows)
            grid.save(os.path.join(outpath, f'grid-{grid_count:04}.png'))
            grid_count += 1

    if sampler is not None:
        del sampler

    info = f"""
{prompt}
Steps: {ddim_steps}, Sampler: {sampler_name}, CFG scale: {cfg_scale}, Seed: {seed}{', GFPGAN' if use_GFPGAN and GFPGAN is not None else ''}
    """.strip()

    return output_images, seed, info
dream_interface = gr.Interface(
    dream,
    inputs=[
        gr.Textbox(label="Prompt", placeholder="A corgi wearing a top hat as an oil painting.", lines=1),
        gr.Slider(minimum=1, maximum=150, step=1, label="Sampling Steps", value=50),
        gr.Radio(label='Sampling method', choices=["DDIM", "PLMS", "k-diffusion"], value="k-diffusion"),
        gr.Checkbox(label='Enable Fixed Code sampling', value=False),
        gr.Checkbox(label='Fix faces using GFPGAN', value=False, visible=GFPGAN is not None),
        gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label="DDIM ETA", value=0.0, visible=False),
        gr.Slider(minimum=1, maximum=16, step=1, label='Sampling iterations', value=1),
        gr.Slider(minimum=1, maximum=4, step=1, label='Samples per iteration', value=1),
        gr.Slider(minimum=1.0, maximum=15.0, step=0.5, label='Classifier Free Guidance Scale', value=7.0),
        gr.Number(label='Seed', value=-1),
        gr.Slider(minimum=64, maximum=2048, step=64, label="Height", value=512),
        gr.Slider(minimum=64, maximum=2048, step=64, label="Width", value=512),
    ],
    outputs=[
        gr.Gallery(label="Images"),
        gr.Number(label='Seed'),
        gr.Textbox(label="Copy-paste generation parameters"),
    ],
    title="Stable Diffusion Text-to-Image K",
    description="Generate images from text with Stable Diffusion (using K-LMS)",
    allow_flagging="never"
)
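# translation() is the img2img entry point. The init image is encoded into latent space,
# noised to the sigma level implied by denoising_strength, and then denoised with the
# K-LMS sampler over the remaining schedule, so higher strength values deviate further
# from the original image.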
def translation(prompt: str, init_img, ddim_steps: int, ddim_eta: float, n_iter: int, n_samples: int, cfg_scale: float, denoising_strength: float, seed: int, height: int, width: int):
    torch.cuda.empty_cache()

    outpath = opt.outdir or "outputs/img2img-samples"

    if seed == -1:
        seed = random.randrange(4294967294)

    sampler = DDIMSampler(model)
    model_wrap = K.external.CompVisDenoiser(model)

    os.makedirs(outpath, exist_ok=True)

    batch_size = n_samples
    n_rows = opt.n_rows if opt.n_rows > 0 else batch_size

    assert prompt is not None
    data = [batch_size * [prompt]]

    sample_path = os.path.join(outpath, "samples")
    os.makedirs(sample_path, exist_ok=True)
    base_count = len(os.listdir(sample_path))
    grid_count = len(os.listdir(outpath)) - 1
    seedit = 0

    image = init_img.convert("RGB")
    w, h = image.size
    image = np.array(image).astype(np.float32) / 255.0
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)

    output_images = []
    precision_scope = autocast if opt.precision == "autocast" else nullcontext
    with torch.no_grad():
        with precision_scope("cuda"):
            init_image = 2. * image - 1.
            init_image = init_image.to(device)
            init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
            init_latent = model.get_first_stage_encoding(model.encode_first_stage(init_image))  # move to latent space
            x0 = init_latent

            sampler.make_schedule(ddim_num_steps=ddim_steps, ddim_eta=ddim_eta, verbose=False)

            assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
            t_enc = int(denoising_strength * ddim_steps)
            print(f"target t_enc is {t_enc} steps")

            with model.ema_scope():
                all_samples = list()
                for n in range(n_iter):
                    for batch_index, prompts in enumerate(data):
                        uc = None
                        if cfg_scale != 1.0:
                            uc = model.get_learned_conditioning(batch_size * [""])
                        if isinstance(prompts, tuple):
                            prompts = list(prompts)
                        c = model.get_learned_conditioning(prompts)

                        sigmas = model_wrap.get_sigmas(ddim_steps)

                        current_seed = seed + n * len(data) + batch_index
                        torch.manual_seed(current_seed)

                        noise = torch.randn_like(x0) * sigmas[ddim_steps - t_enc - 1]  # for GPU draw
                        xi = x0 + noise
                        sigma_sched = sigmas[ddim_steps - t_enc - 1:]
                        # x = torch.randn([n_samples, *shape]).to(device) * sigmas[0] # for CPU draw
                        model_wrap_cfg = CFGDenoiser(model_wrap)
                        extra_args = {'cond': c, 'uncond': uc, 'cond_scale': cfg_scale}

                        samples_ddim = K.sampling.sample_lms(model_wrap_cfg, xi, sigma_sched, extra_args=extra_args, disable=False)
                        x_samples_ddim = model.decode_first_stage(samples_ddim)
                        x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)

                        if not opt.skip_save:
                            for x_sample in x_samples_ddim:
                                x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                                image = Image.fromarray(x_sample.astype(np.uint8))
                                image.save(os.path.join(sample_path, f"{base_count:05}-{current_seed}_{prompt.replace(' ', '_')[:128]}.png"))
                                output_images.append(image)
                                base_count += 1
                                seedit += 1

                        if not opt.skip_grid:
                            all_samples.append(x_samples_ddim)

                if not opt.skip_grid:
                    # additionally, save as grid
                    grid = torch.stack(all_samples, 0)
                    grid = rearrange(grid, 'n b c h w -> (n b) c h w')
                    grid = make_grid(grid, nrow=n_rows)

                    # to image
                    grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()
                    Image.fromarray(grid.astype(np.uint8)).save(os.path.join(outpath, f'grid-{grid_count:04}.png'))
                    Image.fromarray(grid.astype(np.uint8))
                    grid_count += 1

    del sampler

    return output_images, seed
# prompt, init_img, ddim_steps, plms, ddim_eta, n_iter, n_samples, cfg_scale, denoising_strength, seed
img2img_interface = gr.Interface(
    translation,
    inputs=[
        gr.Textbox(placeholder="A fantasy landscape, trending on artstation.", lines=1),
        gr.Image(value="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg", source="upload", interactive=True, type="pil"),
        gr.Slider(minimum=1, maximum=150, step=1, label="Sampling Steps", value=50),
        gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label="DDIM ETA", value=0.0, visible=False),
        gr.Slider(minimum=1, maximum=50, step=1, label='Sampling iterations', value=2),
        gr.Slider(minimum=1, maximum=8, step=1, label='Samples per iteration', value=2),
        gr.Slider(minimum=1.0, maximum=15.0, step=0.5, label='Classifier Free Guidance Scale', value=7.0),
        gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Denoising Strength', value=0.75),
        gr.Number(label='Seed', value=-1),
        gr.Slider(minimum=64, maximum=2048, step=64, label="Resize Height", value=512),
        gr.Slider(minimum=64, maximum=2048, step=64, label="Resize Width", value=512),
    ],
    outputs=[
        gr.Gallery(),
        gr.Number(label='Seed')
    ],
    title="Stable Diffusion Image-to-Image",
    description="Generate images from images with Stable Diffusion",
)
demo = gr.TabbedInterface(interface_list=[dream_interface, img2img_interface], tab_names=["Dream", "Image Translation"])

demo.launch()