stable-diffusion-webui: Commit 598f7fcd
authored Jan 15, 2023 by aria1th
Fix loss_dict problem
parent 205991df
Showing 1 changed file with 3 additions and 1 deletion
modules/hypernetworks/hypernetwork.py (+3, -1)
@@ -561,6 +561,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
     _loss_step = 0 #internal
     # size = len(ds.indexes)
     # loss_dict = defaultdict(lambda : deque(maxlen = 1024))
+    loss_logging = []
     # losses = torch.zeros((size,))
     # previous_mean_losses = [0]
     # previous_mean_loss = 0
@@ -601,6 +602,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
                     else:
                         c = stack_conds(batch.cond).to(devices.device, non_blocking=pin_memory)
                     loss = shared.sd_model(x, c)[0] / gradient_step
+                    loss_logging.append(loss.item())
                     del x
                     del c
@@ -644,7 +646,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
                 if shared.opts.training_enable_tensorboard:
                     epoch_num = hypernetwork.step // len(ds)
                     epoch_step = hypernetwork.step - (epoch_num * len(ds)) + 1
-                    mean_loss = sum(sum(x) for x in loss_dict.values()) / sum(len(x) for x in loss_dict.values())
+                    mean_loss = sum(loss_logging) / len(loss_logging)
                     textual_inversion.tensorboard_add(tensorboard_writer, loss=mean_loss, global_step=hypernetwork.step, step=epoch_step, learn_rate=scheduler.learn_rate, epoch_num=epoch_num)

                 textual_inversion.write_loss(log_directory, "hypernetwork_loss.csv", hypernetwork.step, steps_per_epoch, {
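For readers skimming the diff: before this commit, the TensorBoard branch still averaged over loss_dict, which the surrounding context shows has been commented out earlier in train_hypernetwork(), so the logging path referenced a name that no longer exists there. The commit instead appends each step's loss value to a plain loss_logging list and averages that list. Below is a minimal standalone sketch of that pattern, not code from the repository; fake_training_step and its loss values are made-up stand-ins, and only the loss_logging list and the mean computation mirror the diff.

    # Minimal sketch of the logging pattern this commit switches to.
    # Everything except loss_logging and the mean computation is a
    # hypothetical stand-in for the real loop in train_hypernetwork().

    loss_logging = []  # per-step loss values, appended once per training step


    def fake_training_step(step: int) -> float:
        # Stand-in for: loss = shared.sd_model(x, c)[0] / gradient_step
        return 1.0 / (step + 1)


    for step in range(10):
        loss = fake_training_step(step)
        loss_logging.append(loss)  # mirrors: loss_logging.append(loss.item())

    # Mirrors the new mean used for TensorBoard logging:
    #   mean_loss = sum(loss_logging) / len(loss_logging)
    if loss_logging:  # guard against dividing by zero before any step has run
        mean_loss = sum(loss_logging) / len(loss_logging)
        print(f"mean loss over {len(loss_logging)} steps: {mean_loss:.4f}")

One observable difference from the old, commented-out approach: loss_logging is never truncated in this diff, so the reported mean appears to cover every step since training started, rather than the sliding window the deque(maxlen = 1024)-based loss_dict would have given.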