Commit 5de80618 authored by AUTOMATIC

Merge branch 'master' into hypernetwork-training

parents 12c4d5c6 94853395
# Please read the [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) before submitting a pull request!
If you have a large change, pay special attention to this paragraph:
> Before making changes, if you think that your feature will result in more than 100 lines changing, find me and talk to me about the feature you are proposing. It pains me to reject the hard work someone else did, but I won't add everything to the repo, and it's better if the rejection happens before you have to waste time working on the feature.
Otherwise, after making sure you're following the rules described in the wiki page, remove this section and continue on.
**Describe what this pull request is trying to achieve.**
A clear and concise description of what you're trying to accomplish with this, so your intent doesn't have to be extracted from your code.
**Additional notes and description of your changes**
More technical discussion of your changes goes here, plus anything that a maintainer might have to specifically look at or be wary of.
**Environment this was tested in**
List the environment you have developed / tested this on. As per the contributing page, changes should be able to work on Windows out of the box.
- OS: [e.g. Windows, Linux]
- Browser: [e.g. Chrome, Safari]
- Graphics card: [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB]
**Screenshots or videos of your changes**
If applicable, screenshots or a video showing off your changes. If it edits an existing UI, it should ideally include a before/after comparison.
This is **required** for anything that touches the user interface.
\ No newline at end of file
...@@ -16,7 +16,7 @@ Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-web
- Attention, specify parts of text that the model should pay more attention to
    - a man in a ((tuxedo)) - will pay more attention to tuxedo
    - a man in a (tuxedo:1.21) - alternative syntax
    - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y plot, a way to draw a 2 dimensional plot of images with different parameters
- Textual Inversion
...@@ -34,7 +34,7 @@ Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-web
- Sampling method selection
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Prompt length validation
    - get length of prompt in tokens as you type
    - get a warning after generation if some text was truncated
...@@ -65,6 +65,8 @@ Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-web
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
    - separate prompts using uppercase `AND`
    - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2` (a minimal parsing sketch follows this list)
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts (add `--deepdanbooru` to commandline args)
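The `AND` syntax above combines several weighted subprompts into one generation. As a rough illustration of how such a prompt could be split apart, here is a simplified sketch; it is not the webui's actual parser, and the function name `split_weighted_subprompts` is made up for the example:

```python
import re

def split_weighted_subprompts(prompt):
    # Split on uppercase " AND " and read an optional trailing ":weight"
    # (defaulting to 1.0), e.g. "a cat :1.2 AND a dog AND a penguin :2.2".
    subprompts = []
    for part in prompt.split(" AND "):
        match = re.search(r":\s*([0-9]*\.?[0-9]+)\s*$", part)
        if match:
            weight = float(match.group(1))
            text = part[:match.start()].strip()
        else:
            weight = 1.0
            text = part.strip()
        subprompts.append((text, weight))
    return subprompts

print(split_weighted_subprompts("a cat :1.2 AND a dog AND a penguin :2.2"))
# [('a cat', 1.2), ('a dog', 1.0), ('a penguin', 2.2)]
```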
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
...@@ -122,4 +124,5 @@ The documentation was moved from this README over to the project's [wiki](https:
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- (You)
contextMenuInit = function(){
let eventListenerApplied=false;
let menuSpecs = new Map();
const uid = function(){
return Date.now().toString(36) + Math.random().toString(36).substr(2);
}
function showContextMenu(event,element,menuEntries){
let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop;
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
let tabButton = gradioApp().querySelector('button')
let baseStyle = window.getComputedStyle(tabButton)
const contextMenu = document.createElement('nav')
contextMenu.id = "context-menu"
contextMenu.style.background = baseStyle.background
contextMenu.style.color = baseStyle.color
contextMenu.style.fontFamily = baseStyle.fontFamily
contextMenu.style.top = posy+'px'
contextMenu.style.left = posx+'px'
const contextMenuList = document.createElement('ul')
contextMenuList.className = 'context-menu-items';
contextMenu.append(contextMenuList);
menuEntries.forEach(function(entry){
let contextMenuEntry = document.createElement('a')
contextMenuEntry.innerHTML = entry['name']
contextMenuEntry.addEventListener("click", function(e) {
entry['func']();
})
contextMenuList.append(contextMenuEntry);
})
gradioApp().getRootNode().appendChild(contextMenu)
let menuWidth = contextMenu.offsetWidth + 4;
let menuHeight = contextMenu.offsetHeight + 4;
let windowWidth = window.innerWidth;
let windowHeight = window.innerHeight;
if ( (windowWidth - posx) < menuWidth ) {
contextMenu.style.left = windowWidth - menuWidth + "px";
}
if ( (windowHeight - posy) < menuHeight ) {
contextMenu.style.top = windowHeight - menuHeight + "px";
}
}
function appendContextMenuOption(targetElementSelector, entryName, entryFunction){
let currentItems = menuSpecs.get(targetElementSelector)
if(!currentItems){
currentItems = []
menuSpecs.set(targetElementSelector, currentItems);
}
let newItem = {'id': targetElementSelector+'_'+uid(),
'name': entryName,
'func': entryFunction,
'isNew': true}
currentItems.push(newItem)
return newItem['id']
}
function removeContextMenuOption(uid){
menuSpecs.forEach(function(v,k) {
let index = -1
v.forEach(function(e,ei){if(e['id']==uid){index=ei}})
if(index>=0){
v.splice(index, 1);
}
})
}
function addContextMenuEventListener(){
if(eventListenerApplied){
return;
}
gradioApp().addEventListener("click", function(e) {
let source = e.composedPath()[0]
if(source.id && source.id.indexOf('check_progress')>-1){
return
}
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
});
gradioApp().addEventListener("contextmenu", function(e) {
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
menuSpecs.forEach(function(v,k) {
if(e.composedPath()[0].matches(k)){
showContextMenu(e,e.composedPath()[0],v)
e.preventDefault()
return
}
})
});
eventListenerApplied=true
}
return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener]
}
initResponse = contextMenuInit()
appendContextMenuOption = initResponse[0]
removeContextMenuOption = initResponse[1]
addContextMenuEventListener = initResponse[2]
//Start example Context Menu Items
generateOnRepeatId = appendContextMenuOption('#txt2img_generate','Generate forever',function(){
let genbutton = gradioApp().querySelector('#txt2img_generate');
let interruptbutton = gradioApp().querySelector('#txt2img_interrupt');
if(!interruptbutton.offsetParent){
genbutton.click();
}
clearInterval(window.generateOnRepeatInterval)
window.generateOnRepeatInterval = setInterval(function(){
if(!interruptbutton.offsetParent){
genbutton.click();
}
},
500)}
)
cancelGenerateForever = function(){
clearInterval(window.generateOnRepeatInterval)
}
appendContextMenuOption('#txt2img_interrupt','Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#txt2img_generate', 'Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#roll','Roll three',
function(){
let rollbutton = gradioApp().querySelector('#roll');
setTimeout(function(){rollbutton.click()},100)
setTimeout(function(){rollbutton.click()},200)
setTimeout(function(){rollbutton.click()},300)
}
)
//End example Context Menu Items
onUiUpdate(function(){
addContextMenuEventListener()
});
addEventListener('keydown', (event) => {
let target = event.originalTarget || event.composedPath()[0];
if (!target.hasAttribute("placeholder")) return;
if (!target.placeholder.toLowerCase().includes("prompt")) return;
...
...@@ -35,6 +35,7 @@ titles = {
    "Denoising strength": "Determines how little respect the algorithm should have for the image's content. At 0, nothing will change, and at 1 you'll get an unrelated image. With values below 1.0, processing will take fewer steps than the Sampling Steps slider specifies.",
    "Denoising strength change factor": "In loopback mode, on each loop the denoising strength is multiplied by this value. <1 means decreasing variety so your sequence will converge on a fixed picture. >1 means increasing variety so your sequence will become more and more chaotic.",

    "Skip": "Stop processing current image and continue processing.",
    "Interrupt": "Stop processing images and return any results accumulated so far.",
    "Save": "Write image to a directory (default - log/images) and generation parameters into csv file.",
...@@ -78,6 +79,8 @@ titles = {
    "Highres. fix": "Use a two step process to partially create an image at smaller resolution, upscale, and then improve details in it without changing composition",
    "Scale latent": "Upscale the image in latent space. Alternative is to produce the full image from latent representation, upscale that, and then move it back to latent space.",
    "Eta noise seed delta": "If this value is non-zero, it will be added to seed and used to initialize RNG for noises when using samplers with Eta. You can use this to produce even more variation of images, or you can use this to match images of other software if you know what you are doing.",
    "Do not add watermark to images": "If this option is enabled, watermark will not be added to created images. Warning: if you do not add watermark, you may be behaving in an unethical manner.",
}
...
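The "Denoising strength change factor" hint above describes a simple multiplicative schedule. A minimal sketch of how the strength evolves across loopback iterations; the starting value, factor, and loop count below are arbitrary examples:

```python
# Illustrative only: repeated multiplication by the change factor.
# Factors below 1 shrink the denoising strength (the sequence converges on
# a fixed picture); factors above 1 grow it (output drifts and gets chaotic).
def loopback_strengths(start=0.75, factor=0.95, loops=8):
    values = []
    strength = start
    for _ in range(loops):
        values.append(round(strength, 4))
        strength *= factor
    return values

print(loopback_strengths())             # decreasing sequence
print(loopback_strengths(factor=1.05))  # increasing sequence
```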
// A full size 'lightbox' preview modal shown when left clicking on gallery previews
function closeModal() {
    gradioApp().getElementById("lightboxModal").style.display = "none";
}

function showModal(event) {
    const source = event.target || event.srcElement;
    const modalImage = gradioApp().getElementById("modalImage")
    const lb = gradioApp().getElementById("lightboxModal")
    modalImage.src = source.src
    if (modalImage.style.display === 'none') {
        lb.style.setProperty('background-image', 'url(' + source.src + ')');
    }
    lb.style.display = "block";
    lb.focus()
    event.stopPropagation()
}

function negmod(n, m) {
    return ((n % m) + m) % m;
}
function updateOnBackgroundChange() {
    const modalImage = gradioApp().getElementById("modalImage")
    if (modalImage && modalImage.offsetParent) {
        let allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2")
        let currentButton = null
        allcurrentButtons.forEach(function(elem) {
            if (elem.parentElement.offsetParent) {
                currentButton = elem;
            }
        })

        if (currentButton && modalImage.src != currentButton.children[0].src) {
            modalImage.src = currentButton.children[0].src;
            if (modalImage.style.display === 'none') {
                const modal = gradioApp().getElementById("lightboxModal");
                modal.style.setProperty('background-image', `url(${modalImage.src})`)
            }
        }
    }
}

function modalImageSwitch(offset) {
    var allgalleryButtons = gradioApp().querySelectorAll(".gallery-item.transition-all")
    var galleryButtons = []
    allgalleryButtons.forEach(function(elem) {
        if (elem.parentElement.offsetParent) {
            galleryButtons.push(elem);
        }
    })

    if (galleryButtons.length > 1) {
        var allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2")
        var currentButton = null
        allcurrentButtons.forEach(function(elem) {
            if (elem.parentElement.offsetParent) {
                currentButton = elem;
            }
        })

        var result = -1
        galleryButtons.forEach(function(v, i) {
            if (v == currentButton) {
                result = i
            }
        })

        if (result != -1) {
            nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)]
            nextButton.click()
            const modalImage = gradioApp().getElementById("modalImage");
            const modal = gradioApp().getElementById("lightboxModal");
            modalImage.src = nextButton.children[0].src;
            if (modalImage.style.display === 'none') {
                modal.style.setProperty('background-image', `url(${modalImage.src})`)
            }
            setTimeout(function() {
                modal.focus()
            }, 10)
        }
    }
}
function modalNextImage(event) {
    modalImageSwitch(1)
    event.stopPropagation()
}

function modalPrevImage(event) {
    modalImageSwitch(-1)
    event.stopPropagation()
}

function modalKeyHandler(event) {
    switch (event.key) {
        case "ArrowLeft":
            modalPrevImage(event)
...@@ -80,21 +105,22 @@ function modalKeyHandler(event){
    }
}

function showGalleryImage() {
    setTimeout(function() {
        fullImg_preview = gradioApp().querySelectorAll('img.w-full.object-contain')

        if (fullImg_preview != null) {
            fullImg_preview.forEach(function function_name(e) {
                if (e.dataset.modded)
                    return;
                e.dataset.modded = true;
                if (e && e.parentElement.tagName == 'DIV') {
                    e.style.cursor = 'pointer'
                    e.addEventListener('click', function(evt) {
                        if (!opts.js_modal_lightbox) return;
                        modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed)
                        showModal(evt)
                    }, true);
                }
            });
        }
...@@ -102,21 +128,21 @@ function showGalleryImage(){
    }, 100);
}

function modalZoomSet(modalImage, enable) {
    if (enable) {
        modalImage.classList.add('modalImageFullscreen');
    } else {
        modalImage.classList.remove('modalImageFullscreen');
    }
}

function modalZoomToggle(event) {
    modalImage = gradioApp().getElementById("modalImage");
    modalZoomSet(modalImage, !modalImage.classList.contains('modalImageFullscreen'))
    event.stopPropagation()
}

function modalTileImageToggle(event) {
    const modalImage = gradioApp().getElementById("modalImage");
    const modal = gradioApp().getElementById("lightboxModal");
    const isTiling = modalImage.style.display === 'none';
...@@ -131,17 +157,18 @@ function modalTileImageToggle(event){
    event.stopPropagation()
}

function galleryImageHandler(e) {
    if (e && e.parentElement.tagName == 'BUTTON') {
        e.onclick = showGalleryImage;
    }
}

onUiUpdate(function() {
    fullImg_preview = gradioApp().querySelectorAll('img.w-full')
    if (fullImg_preview != null) {
        fullImg_preview.forEach(galleryImageHandler);
    }
    updateOnBackgroundChange();
})

document.addEventListener("DOMContentLoaded", function() {
...@@ -149,13 +176,13 @@ document.addEventListener("DOMContentLoaded", function() {
    const modal = document.createElement('div')
    modal.onclick = closeModal;
    modal.id = "lightboxModal";
    modal.tabIndex = 0
    modal.addEventListener('keydown', modalKeyHandler, true)

    const modalControls = document.createElement('div')
    modalControls.className = 'modalControls gradio-container';
    modal.append(modalControls);

    const modalZoom = document.createElement('span')
    modalZoom.className = 'modalZoom cursor';
    modalZoom.innerHTML = '&#10529;'
...@@ -180,30 +207,30 @@ document.addEventListener("DOMContentLoaded", function() {
    const modalImage = document.createElement('img')
    modalImage.id = 'modalImage';
    modalImage.onclick = closeModal;
    modalImage.tabIndex = 0
    modalImage.addEventListener('keydown', modalKeyHandler, true)
    modal.appendChild(modalImage)

    const modalPrev = document.createElement('a')
    modalPrev.className = 'modalPrev';
    modalPrev.innerHTML = '&#10094;'
    modalPrev.tabIndex = 0
    modalPrev.addEventListener('click', modalPrevImage, true);
    modalPrev.addEventListener('keydown', modalKeyHandler, true)
    modal.appendChild(modalPrev)

    const modalNext = document.createElement('a')
    modalNext.className = 'modalNext';
    modalNext.innerHTML = '&#10095;'
    modalNext.tabIndex = 0
    modalNext.addEventListener('click', modalNextImage, true);
    modalNext.addEventListener('keydown', modalKeyHandler, true)
    modal.appendChild(modalNext)

    gradioApp().getRootNode().appendChild(modal)

    document.body.appendChild(modalFragment);
});
// code related to showing and updating progressbar shown as the image is being made
global_progressbars = {}

function check_progressbar(id_part, id_progressbar, id_progressbar_span, id_skip, id_interrupt, id_preview, id_gallery){
    var progressbar = gradioApp().getElementById(id_progressbar)
    var skip = id_skip ? gradioApp().getElementById(id_skip) : null
    var interrupt = gradioApp().getElementById(id_interrupt)

    if(opts.show_progress_in_title && progressbar && progressbar.offsetParent){
...@@ -32,30 +33,37 @@ function check_progressbar(id_part, id_progressbar, id_progressbar_span, id_inte
                var progressDiv = gradioApp().querySelectorAll('#' + id_progressbar_span).length > 0;
                if(!progressDiv){
                    if (skip) {
                        skip.style.display = "none"
                    }
                    interrupt.style.display = "none"
                }
            }

            window.setTimeout(function() { requestMoreProgress(id_part, id_progressbar_span, id_skip, id_interrupt) }, 500)
        });
        mutationObserver.observe( progressbar, { childList:true, subtree:true })
    }
}

onUiUpdate(function(){
    check_progressbar('txt2img', 'txt2img_progressbar', 'txt2img_progress_span', 'txt2img_skip', 'txt2img_interrupt', 'txt2img_preview', 'txt2img_gallery')
    check_progressbar('img2img', 'img2img_progressbar', 'img2img_progress_span', 'img2img_skip', 'img2img_interrupt', 'img2img_preview', 'img2img_gallery')
    check_progressbar('ti', 'ti_progressbar', 'ti_progress_span', '', 'ti_interrupt', 'ti_preview', 'ti_gallery')
})

function requestMoreProgress(id_part, id_progressbar_span, id_skip, id_interrupt){
    btn = gradioApp().getElementById(id_part+"_check_progress");
    if(btn==null) return;

    btn.click();
    var progressDiv = gradioApp().querySelectorAll('#' + id_progressbar_span).length > 0;
    var skip = id_skip ? gradioApp().getElementById(id_skip) : null
    var interrupt = gradioApp().getElementById(id_interrupt)
    if(progressDiv && interrupt){
        if (skip) {
            skip.style.display = "block"
        }
        interrupt.style.display = "block"
    }
}
...
...@@ -4,39 +4,17 @@ import os
import sys
import importlib.util
import shlex
import platform

dir_repos = "repositories"
dir_tmp = "tmp"

python = sys.executable
git = os.environ.get('GIT', "git")
torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "f4e99857772fc3a126ba886aadf795a332774878")
codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")

args = shlex.split(commandline_args)


def extract_arg(args, name):
    return [x for x in args if x != name], name in args


args, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')


def repo_dir(name):
    return os.path.join(dir_repos, name)


def run(command, desc=None, errdesc=None):
    if desc is not None:
        print(desc)
...@@ -56,23 +34,11 @@ stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.st
    return result.stdout.decode(encoding="utf8", errors="ignore")


def run_python(code, desc=None, errdesc=None):
    return run(f'"{python}" -c "{code}"', desc, errdesc)


def run_pip(args, desc=None):
    return run(f'"{python}" -m pip {args} --prefer-binary', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")


def check_run(command):
    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    return result.returncode == 0


def check_run_python(code):
    return check_run(f'"{python}" -c "{code}"')


def is_installed(package):
    try:
        spec = importlib.util.find_spec(package)
...@@ -82,6 +48,22 @@ def is_installed(package):
    return spec is not None


def repo_dir(name):
    return os.path.join(dir_repos, name)


def run_python(code, desc=None, errdesc=None):
    return run(f'"{python}" -c "{code}"', desc, errdesc)


def run_pip(args, desc=None):
    return run(f'"{python}" -m pip {args} --prefer-binary', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")


def check_run_python(code):
    return check_run(f'"{python}" -c "{code}"')


def git_clone(url, dir, name, commithash=None):
    # TODO clone into temporary dir and move if successful
...@@ -103,50 +85,81 @@ def git_clone(url, dir, name, commithash=None):
    run(f'"{git}" -C {dir} checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
def prepare_enviroment():
    torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
    requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
    commandline_args = os.environ.get('COMMANDLINE_ARGS', "")

    gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
    clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")

    stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
    taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
    k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "f4e99857772fc3a126ba886aadf795a332774878")
    codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
    blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")

    args = shlex.split(commandline_args)

    args, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')
    xformers = '--xformers' in args
    deepdanbooru = '--deepdanbooru' in args

    try:
        commit = run(f"{git} rev-parse HEAD").strip()
    except Exception:
        commit = "<none>"

    print(f"Python {sys.version}")
    print(f"Commit hash: {commit}")

    if not is_installed("torch") or not is_installed("torchvision"):
        run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch")

    if not skip_torch_cuda_test:
        run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")

    if not is_installed("gfpgan"):
        run_pip(f"install {gfpgan_package}", "gfpgan")

    if not is_installed("clip"):
        run_pip(f"install {clip_package}", "clip")

    if not is_installed("xformers") and xformers and platform.python_version().startswith("3.10"):
        if platform.system() == "Windows":
            run_pip("install https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/c/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl", "xformers")
        elif platform.system() == "Linux":
            run_pip("install xformers", "xformers")

    if not is_installed("deepdanbooru") and deepdanbooru:
        run_pip("install git+https://github.com/KichangKim/DeepDanbooru.git@edf73df4cdaeea2cf00e9ac08bd8a9026b7a7b26#egg=deepdanbooru[tensorflow] tensorflow==2.10.0 tensorflow-io==0.27.0", "deepdanbooru")

    os.makedirs(dir_repos, exist_ok=True)

    git_clone("https://github.com/CompVis/stable-diffusion.git", repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
    git_clone("https://github.com/CompVis/taming-transformers.git", repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
    git_clone("https://github.com/crowsonkb/k-diffusion.git", repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
    git_clone("https://github.com/sczhou/CodeFormer.git", repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
    git_clone("https://github.com/salesforce/BLIP.git", repo_dir('BLIP'), "BLIP", blip_commit_hash)

    if not is_installed("lpips"):
        run_pip(f"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}", "requirements for CodeFormer")

    run_pip(f"install -r {requirements_file}", "requirements for Web UI")

    sys.argv += args

    if "--exit" in args:
        print("Exiting because of --exit argument")
        exit(0)


def start_webui():
    print(f"Launching Web UI with arguments: {' '.join(sys.argv[1:])}")
    import webui
    webui.webui()


if __name__ == "__main__":
    prepare_enviroment()
    start_webui()
...@@ -10,13 +10,11 @@ from basicsr.utils.download_util import load_file_from_url
import modules.upscaler
from modules import devices, modelloader
from modules.bsrgan_model_arch import RRDBNet
from modules.paths import models_path


class UpscalerBSRGAN(modules.upscaler.Upscaler):
    def __init__(self, dirname):
        self.name = "BSRGAN"
        self.model_path = os.path.join(models_path, self.name)
        self.model_name = "BSRGAN 4x"
        self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/BSRGAN.pth"
        self.user_path = dirname
...
import os.path
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import get_context


def _load_tf_and_return_tags(pil_image, threshold):
    import deepdanbooru as dd
    import tensorflow as tf
    import numpy as np

    this_folder = os.path.dirname(__file__)
    model_path = os.path.abspath(os.path.join(this_folder, '..', 'models', 'deepbooru'))
    if not os.path.exists(os.path.join(model_path, 'project.json')):
        # there is no point importing these every time
        import zipfile
        from basicsr.utils.download_util import load_file_from_url
        load_file_from_url(r"https://github.com/KichangKim/DeepDanbooru/releases/download/v3-20211112-sgd-e28/deepdanbooru-v3-20211112-sgd-e28.zip",
                           model_path)
        with zipfile.ZipFile(os.path.join(model_path, "deepdanbooru-v3-20211112-sgd-e28.zip"), "r") as zip_ref:
            zip_ref.extractall(model_path)
        os.remove(os.path.join(model_path, "deepdanbooru-v3-20211112-sgd-e28.zip"))

    tags = dd.project.load_tags_from_project(model_path)
    model = dd.project.load_model_from_project(
        model_path, compile_model=True
    )

    width = model.input_shape[2]
    height = model.input_shape[1]
    image = np.array(pil_image)
    image = tf.image.resize(
        image,
        size=(height, width),
        method=tf.image.ResizeMethod.AREA,
        preserve_aspect_ratio=True,
    )
    image = image.numpy()  # EagerTensor to np.array
    image = dd.image.transform_and_pad_image(image, width, height)
    image = image / 255.0
    image_shape = image.shape
    image = image.reshape((1, image_shape[0], image_shape[1], image_shape[2]))

    y = model.predict(image)[0]

    result_dict = {}
    for i, tag in enumerate(tags):
        result_dict[tag] = y[i]
    result_tags_out = []
    result_tags_print = []
    for tag in tags:
        if result_dict[tag] >= threshold:
            if tag.startswith("rating:"):
                continue
            result_tags_out.append(tag)
            result_tags_print.append(f'{result_dict[tag]} {tag}')

    print('\n'.join(sorted(result_tags_print, reverse=True)))

    return ', '.join(result_tags_out).replace('_', ' ').replace(':', ' ')


def subprocess_init_no_cuda():
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"


def get_deepbooru_tags(pil_image, threshold=0.5):
    context = get_context('spawn')
    with ProcessPoolExecutor(initializer=subprocess_init_no_cuda, mp_context=context) as executor:
        f = executor.submit(_load_tf_and_return_tags, pil_image, threshold, )
        ret = f.result()  # will rethrow any exceptions
    return ret
\ No newline at end of file
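For context on the module above: the TensorFlow work runs in a spawned subprocess with `CUDA_VISIBLE_DEVICES` set to `-1`, so DeepDanbooru does not compete with torch for GPU memory, and that memory is released when the process pool exits. A hedged usage sketch follows; it assumes the module is importable as `modules.deepbooru`, that DeepDanbooru was installed via `--deepdanbooru`, and that the image path is a placeholder:

```python
from PIL import Image
from modules.deepbooru import get_deepbooru_tags  # assumed module path

# "example.png" is a placeholder; any RGB image works.
image = Image.open("example.png").convert("RGB")
tags = get_deepbooru_tags(image, threshold=0.5)  # comma-separated tag string
print(tags)
```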
...@@ -36,6 +36,7 @@ errors.run(enable_tf32, "Enabling TF32")
device = device_gfpgan = device_bsrgan = device_esrgan = device_scunet = device_codeformer = get_optimal_device()
dtype = torch.float16
dtype_vae = torch.float16

def randn(seed, shape):
    # Pytorch currently doesn't handle setting randomness correctly when the metal backend is used.
...@@ -59,9 +60,12 @@ def randn_without_seed(shape):
    return torch.randn(shape, device=device)


def autocast(disable=False):
    from modules import shared

    if disable:
        return contextlib.nullcontext()

    if dtype == torch.float32 or shared.cmd_opts.precision == "full":
        return contextlib.nullcontext()
...
...@@ -5,9 +5,8 @@ import torch
from PIL import Image
from basicsr.utils.download_util import load_file_from_url

import modules.esrgan_model_arch as arch
from modules import shared, modelloader, images, devices
from modules.paths import models_path
from modules.upscaler import Upscaler, UpscalerData
from modules.shared import opts
...@@ -76,7 +75,6 @@ class UpscalerESRGAN(Upscaler):
        self.model_name = "ESRGAN_4x"
        self.scalers = []
        self.user_path = dirname
        self.model_path = os.path.join(models_path, self.name)
        super().__init__()
        model_paths = self.find_models(ext_filter=[".pt", ".pth"])
        scalers = []
...@@ -111,7 +109,7 @@ class UpscalerESRGAN(Upscaler):
            print("Unable to load %s from %s" % (self.model_path, filename))
            return None

        pretrained_net = torch.load(filename, map_location='cpu' if devices.device_esrgan.type == 'mps' else None)
        crt_model = arch.RRDBNet(3, 3, 64, 23, gc=32)

        pretrained_net = fix_model_layers(crt_model, pretrained_net)
...
...@@ -29,7 +29,7 @@ def run_extras(extras_mode, image, image_folder, gfpgan_visibility, codeformer_v
    if extras_mode == 1:
        #convert file to pillow image
        for img in image_folder:
            image = Image.open(img)
            imageArr.append(image)
            imageNameArr.append(os.path.splitext(img.orig_name)[0])
    else:
...@@ -98,6 +98,10 @@ def run_extras(extras_mode, image, image_folder, gfpgan_visibility, codeformer_v
                    no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo,
                    forced_filename=image_name if opts.use_original_name_batch else None)

        if opts.enable_pnginfo:
            image.info = existing_pnginfo
            image.info["extras"] = info

        outputs.append(image)

    devices.torch_gc()
...@@ -169,9 +173,9 @@ def run_modelmerger(primary_model_name, secondary_model_name, interp_method, int
    print(f"Loading {secondary_model_info.filename}...")
    secondary_model = torch.load(secondary_model_info.filename, map_location='cpu')

    theta_0 = sd_models.get_state_dict_from_checkpoint(primary_model)
    theta_1 = sd_models.get_state_dict_from_checkpoint(secondary_model)

    theta_funcs = {
        "Weighted Sum": weighted_sum,
...
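`weighted_sum` itself lies outside this hunk; a merge configured this way typically interpolates the two state dicts tensor by tensor. A sketch under that assumption (the name `weighted_sum_sketch` and the alpha value are illustrative, not the repo's actual implementation):

```python
import torch

# Assumption: "Weighted Sum" means per-tensor linear interpolation by alpha.
def weighted_sum_sketch(theta0: torch.Tensor, theta1: torch.Tensor, alpha: float) -> torch.Tensor:
    return (1 - alpha) * theta0 + alpha * theta1

# Applied key by key across the state dicts extracted above, e.g.:
# merged = {k: weighted_sum_sketch(theta_0[k], theta_1[k], 0.3) for k in theta_0 if k in theta_1}
```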
import glob
import os
import sys
import traceback

import torch
from ldm.util import default
from modules import devices, shared
from torch import einsum
from einops import rearrange, repeat


class HypernetworkModule(torch.nn.Module):
    def __init__(self, dim, state_dict):
        super().__init__()

        self.linear1 = torch.nn.Linear(dim, dim * 2)
        self.linear2 = torch.nn.Linear(dim * 2, dim)

        self.load_state_dict(state_dict, strict=True)
        self.to(devices.device)

    def forward(self, x):
        return x + (self.linear2(self.linear1(x)))


class Hypernetwork:
    filename = None
    name = None

    def __init__(self, filename):
        self.filename = filename
        self.name = os.path.splitext(os.path.basename(filename))[0]
        self.layers = {}

        state_dict = torch.load(filename, map_location='cpu')
        for size, sd in state_dict.items():
            self.layers[size] = (HypernetworkModule(size, sd[0]), HypernetworkModule(size, sd[1]))


def list_hypernetworks(path):
    res = {}
    for filename in glob.iglob(os.path.join(path, '**/*.pt'), recursive=True):
        name = os.path.splitext(os.path.basename(filename))[0]
        res[name] = filename
    return res


def load_hypernetwork(filename):
    path = shared.hypernetworks.get(filename, None)
    if path is not None:
        print(f"Loading hypernetwork {filename}")
        try:
            shared.loaded_hypernetwork = Hypernetwork(path)
        except Exception:
            print(f"Error loading hypernetwork {path}", file=sys.stderr)
            print(traceback.format_exc(), file=sys.stderr)
    else:
        if shared.loaded_hypernetwork is not None:
            print(f"Unloading hypernetwork")

        shared.loaded_hypernetwork = None


def apply_hypernetwork(hypernetwork, context):
    hypernetwork_layers = (hypernetwork.layers if hypernetwork is not None else {}).get(context.shape[2], None)

    if hypernetwork_layers is None:
        return context, context

    context_k = hypernetwork_layers[0](context)
    context_v = hypernetwork_layers[1](context)
    return context_k, context_v


def attention_CrossAttention_forward(self, x, context=None, mask=None):
    h = self.heads

    q = self.to_q(x)
    context = default(context, x)

    context_k, context_v = apply_hypernetwork(shared.loaded_hypernetwork, context)
    k = self.to_k(context_k)
    v = self.to_v(context_v)

    q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))

    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale

    if mask is not None:
        mask = rearrange(mask, 'b ... -> b (...)')
        max_neg_value = -torch.finfo(sim.dtype).max
        mask = repeat(mask, 'b j -> (b h) () j', h=h)
        sim.masked_fill_(~mask, max_neg_value)

    # attention, what we cannot get enough of
    attn = sim.softmax(dim=-1)

    out = einsum('b i j, b j d -> b i d', attn, v)
    out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
    return self.to_out(out)
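`Hypernetwork.__init__` above expects `torch.load(filename)` to return a dict keyed by context dimension, with each value holding two module state dicts: one applied to the keys and one to the values of cross-attention. A sketch of writing a compatible, untrained checkpoint; the dimensions 768 and 1280 and the file name are example values, not something this code mandates:

```python
import torch

def blank_module_state(dim):
    # Build a state dict with the shapes and key names HypernetworkModule expects
    # (linear1: dim -> dim*2, linear2: dim*2 -> dim).
    layers = torch.nn.Sequential(torch.nn.Linear(dim, dim * 2), torch.nn.Linear(dim * 2, dim))
    sd = layers.state_dict()
    return {
        "linear1.weight": sd["0.weight"], "linear1.bias": sd["0.bias"],
        "linear2.weight": sd["1.weight"], "linear2.bias": sd["1.bias"],
    }

# Layout assumed by the loader: {context_dim: (state_dict_for_k, state_dict_for_v), ...}
checkpoint = {dim: (blank_module_state(dim), blank_module_state(dim)) for dim in (768, 1280)}
torch.save(checkpoint, "example_hypernetwork.pt")  # loadable by Hypernetwork("example_hypernetwork.pt") inside the webui
```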
...@@ -349,6 +349,38 @@ def get_next_sequence_number(path, basename):

def save_image(image, path, basename, seed=None, prompt=None, extension='png', info=None, short_filename=False, no_prompt=False, grid=False, pnginfo_section_name='parameters', p=None, existing_info=None, forced_filename=None, suffix="", save_to_dirs=None):
    '''Save an image.

    Args:
        image (`PIL.Image`):
            The image to be saved.
        path (`str`):
            The directory to save the image. Note, the option `save_to_dirs` will make the image be saved into a subdirectory.
        basename (`str`):
            The base filename which will be applied to `filename pattern`.
        seed, prompt, short_filename,
        extension (`str`):
            Image file extension, default is `png`.
        pnginfo_section_name (`str`):
            Specify the name of the section in which `info` will be saved.
        info (`str` or `PngImagePlugin.iTXt`):
            PNG info chunks.
        existing_info (`dict`):
            Additional PNG info. `existing_info == {pnginfo_section_name: info, ...}`
        no_prompt:
            TODO I don't know its meaning.
        p (`StableDiffusionProcessing`)
        forced_filename (`str`):
            If specified, `basename` and filename pattern will be ignored.
        save_to_dirs (bool):
            If true, the image will be saved into a subdirectory of `path`.

    Returns: (fullfn, txt_fullfn)
        fullfn (`str`):
            The full path of the saved image.
        txt_fullfn (`str` or None):
            If a text file is saved for this image, this will be its full path. Otherwise None.
    '''
    if short_filename or prompt is None or seed is None:
        file_decoration = ""
    elif opts.save_to_dirs:
...@@ -424,7 +456,10 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
        piexif.insert(exif_bytes(), fullfn_without_extension + ".jpg")

    if opts.save_txt and info is not None:
        txt_fullfn = f"{fullfn_without_extension}.txt"
        with open(txt_fullfn, "w", encoding="utf8") as file:
            file.write(info + "\n")
    else:
        txt_fullfn = None

    return fullfn, txt_fullfn
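With the change above, `save_image` now returns a `(fullfn, txt_fullfn)` pair. A hedged call sketch showing how a caller unpacks it; it assumes the function lives in `modules.images` and that the webui options (`opts`) are already initialized, and the paths, seed, and prompt are placeholders:

```python
from PIL import Image
from modules.images import save_image  # assumed module path

img = Image.new("RGB", (512, 512))  # placeholder image
fullfn, txt_fullfn = save_image(img, "outputs/txt2img-images", "sample",
                                seed=12345, prompt="a photo of a cat",
                                info="a photo of a cat\nSteps: 20, Seed: 12345")
print(fullfn)      # path of the saved image
print(txt_fullfn)  # path of the .txt sidecar, or None when opts.save_txt is off
```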
...@@ -32,6 +32,8 @@ def process_batch(p, input_dir, output_dir, args):
    for i, image in enumerate(images):
        state.job = f"{i+1} out of {len(images)}"
        if state.skipped:
            state.skipped = False

        if state.interrupted:
            break
...
...@@ -140,11 +140,11 @@ class InterrogateModels:
            res = caption

            clip_image = self.clip_preprocess(pil_image).unsqueeze(0).type(self.dtype).to(shared.device)

            precision_scope = torch.autocast if shared.cmd_opts.precision == "autocast" else contextlib.nullcontext
            with torch.no_grad(), precision_scope("cuda"):
                image_features = self.clip_model.encode_image(clip_image).type(self.dtype)

                image_features /= image_features.norm(dim=-1, keepdim=True)
...
...@@ -7,13 +7,11 @@ from basicsr.utils.download_util import load_file_from_url
from modules.upscaler import Upscaler, UpscalerData
from modules.ldsr_model_arch import LDSR
from modules import shared
from modules.paths import models_path


class UpscalerLDSR(Upscaler):
    def __init__(self, user_path):
        self.name = "LDSR"
        self.model_path = os.path.join(models_path, self.name)
        self.user_path = user_path
        self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
        self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
...
import argparse
import os
import sys
import modules.safe

script_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
models_path = os.path.join(script_path, "models")
...@@ -12,6 +13,7 @@ possible_sd_paths = [os.path.join(script_path, 'repositories/stable-diffusion'),
for possible_sd_path in possible_sd_paths:
    if os.path.exists(os.path.join(possible_sd_path, 'ldm/models/diffusion/ddpm.py')):
        sd_path = os.path.abspath(possible_sd_path)
        break

assert sd_path is not None, "Couldn't find Stable Diffusion in any of: " + str(possible_sd_paths)
...
...@@ -46,6 +46,12 @@ def apply_color_correction(correction, image): ...@@ -46,6 +46,12 @@ def apply_color_correction(correction, image):
return image return image
def get_correct_sampler(p):
if isinstance(p, modules.processing.StableDiffusionProcessingTxt2Img):
return sd_samplers.samplers
elif isinstance(p, modules.processing.StableDiffusionProcessingImg2Img):
return sd_samplers.samplers_for_img2img
class StableDiffusionProcessing: class StableDiffusionProcessing:
def __init__(self, sd_model=None, outpath_samples=None, outpath_grids=None, prompt="", styles=None, seed=-1, subseed=-1, subseed_strength=0, seed_resize_from_h=-1, seed_resize_from_w=-1, seed_enable_extras=True, sampler_index=0, batch_size=1, n_iter=1, steps=50, cfg_scale=7.0, width=512, height=512, restore_faces=False, tiling=False, do_not_save_samples=False, do_not_save_grid=False, extra_generation_params=None, overlay_images=None, negative_prompt=None, eta=None): def __init__(self, sd_model=None, outpath_samples=None, outpath_grids=None, prompt="", styles=None, seed=-1, subseed=-1, subseed_strength=0, seed_resize_from_h=-1, seed_resize_from_w=-1, seed_enable_extras=True, sampler_index=0, batch_size=1, n_iter=1, steps=50, cfg_scale=7.0, width=512, height=512, restore_faces=False, tiling=False, do_not_save_samples=False, do_not_save_grid=False, extra_generation_params=None, overlay_images=None, negative_prompt=None, eta=None):
self.sd_model = sd_model self.sd_model = sd_model
...@@ -123,6 +129,7 @@ class Processed: ...@@ -123,6 +129,7 @@ class Processed:
self.index_of_first_image = index_of_first_image self.index_of_first_image = index_of_first_image
self.styles = p.styles self.styles = p.styles
self.job_timestamp = state.job_timestamp self.job_timestamp = state.job_timestamp
self.clip_skip = opts.CLIP_stop_at_last_layers
self.eta = p.eta self.eta = p.eta
self.ddim_discretize = p.ddim_discretize self.ddim_discretize = p.ddim_discretize
...@@ -169,6 +176,7 @@ class Processed: ...@@ -169,6 +176,7 @@ class Processed:
"infotexts": self.infotexts, "infotexts": self.infotexts,
"styles": self.styles, "styles": self.styles,
"job_timestamp": self.job_timestamp, "job_timestamp": self.job_timestamp,
"clip_skip": self.clip_skip,
} }
return json.dumps(obj) return json.dumps(obj)
...@@ -199,7 +207,7 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see ...@@ -199,7 +207,7 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see
# enables the generation of additional tensors with noise that the sampler will use during its processing. # enables the generation of additional tensors with noise that the sampler will use during its processing.
# Using those pre-generated tensors instead of simple torch.randn allows a batch with seeds [100, 101] to # Using those pre-generated tensors instead of simple torch.randn allows a batch with seeds [100, 101] to
# produce the same images as with two batches [100], [101]. # produce the same images as with two batches [100], [101].
if p is not None and p.sampler is not None and len(seeds) > 1 and opts.enable_batch_seeds: if p is not None and p.sampler is not None and (len(seeds) > 1 and opts.enable_batch_seeds or opts.eta_noise_seed_delta > 0):
sampler_noises = [[] for _ in range(p.sampler.number_of_needed_noises(p))] sampler_noises = [[] for _ in range(p.sampler.number_of_needed_noises(p))]
else: else:
sampler_noises = None sampler_noises = None
...@@ -239,6 +247,9 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see ...@@ -239,6 +247,9 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see
if sampler_noises is not None: if sampler_noises is not None:
cnt = p.sampler.number_of_needed_noises(p) cnt = p.sampler.number_of_needed_noises(p)
if opts.eta_noise_seed_delta > 0:
torch.manual_seed(seed + opts.eta_noise_seed_delta)
for j in range(cnt): for j in range(cnt):
sampler_noises[j].append(devices.randn_without_seed(tuple(noise_shape))) sampler_noises[j].append(devices.randn_without_seed(tuple(noise_shape)))
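The comment earlier in this hunk explains the goal: noise is drawn once per seed so a batch with seeds [100, 101] reproduces two single-image batches, and the new eta_noise_seed_delta (ENSD) option shifts the seed used for the sampler's extra noise. A minimal sketch of that idea, using plain torch with arbitrary example seeds and shapes (not code from the patched module):
import torch
def noise_for_seed(seed, shape=(4, 64, 64), ensd=0):
    # seeding per image (optionally shifted by ENSD) makes the draw independent of batch size
    torch.manual_seed(seed + ensd)
    return torch.randn(shape)
batch_of_two = torch.stack([noise_for_seed(s) for s in (100, 101)])
two_batches = torch.stack([noise_for_seed(100), noise_for_seed(101)])
assert torch.equal(batch_of_two, two_batches)  # identical noise either way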
...@@ -251,6 +262,13 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see ...@@ -251,6 +262,13 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see
return x return x
def decode_first_stage(model, x):
with devices.autocast(disable=x.dtype == devices.dtype_vae):
x = model.decode_first_stage(x)
return x
def get_fixed_seed(seed): def get_fixed_seed(seed):
if seed is None or seed == '' or seed == -1: if seed is None or seed == '' or seed == -1:
return int(random.randrange(4294967294)) return int(random.randrange(4294967294))
...@@ -266,14 +284,18 @@ def fix_seed(p): ...@@ -266,14 +284,18 @@ def fix_seed(p):
def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments, iteration=0, position_in_batch=0): def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments, iteration=0, position_in_batch=0):
index = position_in_batch + iteration * p.batch_size index = position_in_batch + iteration * p.batch_size
clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)
generation_params = { generation_params = {
"Steps": p.steps, "Steps": p.steps,
"Sampler": sd_samplers.samplers[p.sampler_index].name, "Sampler": get_correct_sampler(p)[p.sampler_index].name,
"CFG scale": p.cfg_scale, "CFG scale": p.cfg_scale,
"Seed": all_seeds[index], "Seed": all_seeds[index],
"Face restoration": (opts.face_restoration_model if p.restore_faces else None), "Face restoration": (opts.face_restoration_model if p.restore_faces else None),
"Size": f"{p.width}x{p.height}", "Size": f"{p.width}x{p.height}",
"Model hash": getattr(p, 'sd_model_hash', None if not opts.add_model_hash_to_info or not shared.sd_model.sd_model_hash else shared.sd_model.sd_model_hash), "Model hash": getattr(p, 'sd_model_hash', None if not opts.add_model_hash_to_info or not shared.sd_model.sd_model_hash else shared.sd_model.sd_model_hash),
"Model": (None if not opts.add_model_name_to_info or not shared.sd_model.sd_checkpoint_info.model_name else shared.sd_model.sd_checkpoint_info.model_name.replace(',', '').replace(':', '')),
"Hypernet": (None if shared.loaded_hypernetwork is None else shared.loaded_hypernetwork.name.replace(',', '').replace(':', '')),
"Batch size": (None if p.batch_size < 2 else p.batch_size), "Batch size": (None if p.batch_size < 2 else p.batch_size),
"Batch pos": (None if p.batch_size < 2 else position_in_batch), "Batch pos": (None if p.batch_size < 2 else position_in_batch),
"Variation seed": (None if p.subseed_strength == 0 else all_subseeds[index]), "Variation seed": (None if p.subseed_strength == 0 else all_subseeds[index]),
...@@ -281,6 +303,8 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments, iteration ...@@ -281,6 +303,8 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments, iteration
"Seed resize from": (None if p.seed_resize_from_w == 0 or p.seed_resize_from_h == 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"), "Seed resize from": (None if p.seed_resize_from_w == 0 or p.seed_resize_from_h == 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"),
"Denoising strength": getattr(p, 'denoising_strength', None), "Denoising strength": getattr(p, 'denoising_strength', None),
"Eta": (None if p.sampler is None or p.sampler.eta == p.sampler.default_eta else p.sampler.eta), "Eta": (None if p.sampler is None or p.sampler.eta == p.sampler.default_eta else p.sampler.eta),
"Clip skip": None if clip_skip <= 1 else clip_skip,
"ENSD": None if opts.eta_noise_seed_delta == 0 else opts.eta_noise_seed_delta,
} }
generation_params.update(p.extra_generation_params) generation_params.update(p.extra_generation_params)
...@@ -312,6 +336,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -312,6 +336,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
os.makedirs(p.outpath_grids, exist_ok=True) os.makedirs(p.outpath_grids, exist_ok=True)
modules.sd_hijack.model_hijack.apply_circular(p.tiling) modules.sd_hijack.model_hijack.apply_circular(p.tiling)
modules.sd_hijack.model_hijack.clear_comments()
comments = {} comments = {}
...@@ -341,7 +366,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -341,7 +366,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
infotexts = [] infotexts = []
output_images = [] output_images = []
with torch.no_grad(): with torch.no_grad(), p.sd_model.ema_scope():
with devices.autocast(): with devices.autocast():
p.init(all_prompts, all_seeds, all_subseeds) p.init(all_prompts, all_seeds, all_subseeds)
...@@ -349,6 +374,9 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -349,6 +374,9 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
state.job_count = p.n_iter state.job_count = p.n_iter
for n in range(p.n_iter): for n in range(p.n_iter):
if state.skipped:
state.skipped = False
if state.interrupted: if state.interrupted:
break break
...@@ -375,15 +403,14 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -375,15 +403,14 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
with devices.autocast(): with devices.autocast():
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength) samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
if state.interrupted: if state.interrupted or state.skipped:
# if we are interruped, sample returns just noise # if we are interrupted, sample returns just noise
# use the image collected previously in sampler loop # use the image collected previously in sampler loop
samples_ddim = shared.state.current_latent samples_ddim = shared.state.current_latent
samples_ddim = samples_ddim.to(devices.dtype) samples_ddim = samples_ddim.to(devices.dtype_vae)
x_samples_ddim = decode_first_stage(p.sd_model, samples_ddim)
x_samples_ddim = p.sd_model.decode_first_stage(samples_ddim)
x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
del samples_ddim del samples_ddim
...@@ -436,7 +463,8 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -436,7 +463,8 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
text = infotext(n, i) text = infotext(n, i)
infotexts.append(text) infotexts.append(text)
image.info["parameters"] = text if opts.enable_pnginfo:
image.info["parameters"] = text
output_images.append(image) output_images.append(image)
del x_samples_ddim del x_samples_ddim
...@@ -455,7 +483,8 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -455,7 +483,8 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
if opts.return_grid: if opts.return_grid:
text = infotext() text = infotext()
infotexts.insert(0, text) infotexts.insert(0, text)
grid.info["parameters"] = text if opts.enable_pnginfo:
grid.info["parameters"] = text
output_images.insert(0, grid) output_images.insert(0, grid)
index_of_first_image = 1 index_of_first_image = 1
...@@ -514,7 +543,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing): ...@@ -514,7 +543,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
if self.scale_latent: if self.scale_latent:
samples = torch.nn.functional.interpolate(samples, size=(self.height // opt_f, self.width // opt_f), mode="bilinear") samples = torch.nn.functional.interpolate(samples, size=(self.height // opt_f, self.width // opt_f), mode="bilinear")
else: else:
decoded_samples = self.sd_model.decode_first_stage(samples) decoded_samples = decode_first_stage(self.sd_model, samples)
if opts.upscaler_for_img2img is None or opts.upscaler_for_img2img == "None": if opts.upscaler_for_img2img is None or opts.upscaler_for_img2img == "None":
decoded_samples = torch.nn.functional.interpolate(decoded_samples, size=(self.height, self.width), mode="bilinear") decoded_samples = torch.nn.functional.interpolate(decoded_samples, size=(self.height, self.width), mode="bilinear")
......
...@@ -13,13 +13,14 @@ import lark ...@@ -13,13 +13,14 @@ import lark
schedule_parser = lark.Lark(r""" schedule_parser = lark.Lark(r"""
!start: (prompt | /[][():]/+)* !start: (prompt | /[][():]/+)*
prompt: (emphasized | scheduled | plain | WHITESPACE)* prompt: (emphasized | scheduled | alternate | plain | WHITESPACE)*
!emphasized: "(" prompt ")" !emphasized: "(" prompt ")"
| "(" prompt ":" prompt ")" | "(" prompt ":" prompt ")"
| "[" prompt "]" | "[" prompt "]"
scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER "]" scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER "]"
alternate: "[" prompt ("|" prompt)+ "]"
WHITESPACE: /\s+/ WHITESPACE: /\s+/
plain: /([^\\\[\]():]|\\.)+/ plain: /([^\\\[\]():|]|\\.)+/
%import common.SIGNED_NUMBER -> NUMBER %import common.SIGNED_NUMBER -> NUMBER
""") """)
...@@ -59,6 +60,8 @@ def get_learned_conditioning_prompt_schedules(prompts, steps): ...@@ -59,6 +60,8 @@ def get_learned_conditioning_prompt_schedules(prompts, steps):
tree.children[-1] *= steps tree.children[-1] *= steps
tree.children[-1] = min(steps, int(tree.children[-1])) tree.children[-1] = min(steps, int(tree.children[-1]))
l.append(tree.children[-1]) l.append(tree.children[-1])
def alternate(self, tree):
l.extend(range(1, steps+1))
CollectSteps().visit(tree) CollectSteps().visit(tree)
return sorted(set(l)) return sorted(set(l))
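The new alternate rule makes the prompt change on every sampling step; a tiny sketch of the per-step selection that the AtStep hunk further below performs, assuming the example prompt `[cow|horse]` and 6 steps:
# "[cow|horse]" picks option (step - 1) % n_options on each step
options = ["cow", "horse"]
schedule = [(step, options[(step - 1) % len(options)]) for step in range(1, 7)]
# -> [(1, 'cow'), (2, 'horse'), (3, 'cow'), (4, 'horse'), (5, 'cow'), (6, 'horse')]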
...@@ -67,6 +70,8 @@ def get_learned_conditioning_prompt_schedules(prompts, steps): ...@@ -67,6 +70,8 @@ def get_learned_conditioning_prompt_schedules(prompts, steps):
def scheduled(self, args): def scheduled(self, args):
before, after, _, when = args before, after, _, when = args
yield before or () if step <= when else after yield before or () if step <= when else after
def alternate(self, args):
yield next(args[(step - 1)%len(args)])
def start(self, args): def start(self, args):
def flatten(x): def flatten(x):
if type(x) == str: if type(x) == str:
...@@ -239,6 +244,15 @@ def reconstruct_multicond_batch(c: MulticondLearnedConditioning, current_step): ...@@ -239,6 +244,15 @@ def reconstruct_multicond_batch(c: MulticondLearnedConditioning, current_step):
conds_list.append(conds_for_batch) conds_list.append(conds_for_batch)
# if prompts have wildly different lengths above the limit we'll get tensors of different shapes
# and won't be able to torch.stack them. So this fixes that.
token_count = max([x.shape[0] for x in tensors])
for i in range(len(tensors)):
if tensors[i].shape[0] != token_count:
last_vector = tensors[i][-1:]
last_vector_repeated = last_vector.repeat([token_count - tensors[i].shape[0], 1])
tensors[i] = torch.vstack([tensors[i], last_vector_repeated])
return conds_list, torch.stack(tensors).to(device=param.device, dtype=param.dtype) return conds_list, torch.stack(tensors).to(device=param.device, dtype=param.dtype)
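A short sketch of the padding trick above, with made-up shapes: each conditioning tensor is extended to the longest token count by repeating its last vector so the batch can be stacked:
import torch
tensors = [torch.randn(77, 768), torch.randn(154, 768)]   # e.g. a short and a long prompt
token_count = max(t.shape[0] for t in tensors)
for i, t in enumerate(tensors):
    if t.shape[0] != token_count:
        last_vector = t[-1:]
        tensors[i] = torch.vstack([t, last_vector.repeat(token_count - t.shape[0], 1)])
stacked = torch.stack(tensors)   # shape (2, 154, 768)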
......
...@@ -8,14 +8,12 @@ from basicsr.utils.download_util import load_file_from_url ...@@ -8,14 +8,12 @@ from basicsr.utils.download_util import load_file_from_url
from realesrgan import RealESRGANer from realesrgan import RealESRGANer
from modules.upscaler import Upscaler, UpscalerData from modules.upscaler import Upscaler, UpscalerData
from modules.paths import models_path
from modules.shared import cmd_opts, opts from modules.shared import cmd_opts, opts
class UpscalerRealESRGAN(Upscaler): class UpscalerRealESRGAN(Upscaler):
def __init__(self, path): def __init__(self, path):
self.name = "RealESRGAN" self.name = "RealESRGAN"
self.model_path = os.path.join(models_path, self.name)
self.user_path = path self.user_path = path
super().__init__() super().__init__()
try: try:
......
# this code is adapted from the script contributed by anon from /h/
import io
import pickle
import collections
import sys
import traceback
import torch
import numpy
import _codecs
import zipfile
# PyTorch 1.13 and later have _TypedStorage renamed to TypedStorage
TypedStorage = torch.storage.TypedStorage if hasattr(torch.storage, 'TypedStorage') else torch.storage._TypedStorage
def encode(*args):
out = _codecs.encode(*args)
return out
class RestrictedUnpickler(pickle.Unpickler):
def persistent_load(self, saved_id):
assert saved_id[0] == 'storage'
return TypedStorage()
def find_class(self, module, name):
if module == 'collections' and name == 'OrderedDict':
return getattr(collections, name)
if module == 'torch._utils' and name in ['_rebuild_tensor_v2', '_rebuild_parameter']:
return getattr(torch._utils, name)
if module == 'torch' and name in ['FloatStorage', 'HalfStorage', 'IntStorage', 'LongStorage', 'DoubleStorage']:
return getattr(torch, name)
if module == 'torch.nn.modules.container' and name in ['ParameterDict']:
return getattr(torch.nn.modules.container, name)
if module == 'numpy.core.multiarray' and name == 'scalar':
return numpy.core.multiarray.scalar
if module == 'numpy' and name == 'dtype':
return numpy.dtype
if module == '_codecs' and name == 'encode':
return encode
if module == "pytorch_lightning.callbacks" and name == 'model_checkpoint':
import pytorch_lightning.callbacks
return pytorch_lightning.callbacks.model_checkpoint
if module == "pytorch_lightning.callbacks.model_checkpoint" and name == 'ModelCheckpoint':
import pytorch_lightning.callbacks.model_checkpoint
return pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint
if module == "__builtin__" and name == 'set':
return set
# Forbid everything else.
raise pickle.UnpicklingError(f"global '{module}/{name}' is forbidden")
def check_pt(filename):
try:
# new pytorch format is a zip file
with zipfile.ZipFile(filename) as z:
with z.open('archive/data.pkl') as file:
unpickler = RestrictedUnpickler(file)
unpickler.load()
except zipfile.BadZipfile:
# if it's not a zip file, it's an old pytorch format, with five objects written to pickle
with open(filename, "rb") as file:
unpickler = RestrictedUnpickler(file)
for i in range(5):
unpickler.load()
def load(filename, *args, **kwargs):
from modules import shared
try:
if not shared.cmd_opts.disable_safe_unpickle:
check_pt(filename)
except Exception:
print(f"Error verifying pickled file from {filename}:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
print(f"\nThe file may be malicious, so the program is not going to read it.", file=sys.stderr)
print(f"You can skip this check with --disable-safe-unpickle commandline argument.", file=sys.stderr)
return None
return unsafe_torch_load(filename, *args, **kwargs)
unsafe_torch_load = torch.load
torch.load = load
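Because the module replaces torch.load at import time, any checkpoint loaded afterwards goes through the restricted unpickler first. A usage sketch (the file path is a made-up example):
import torch
import modules.safe  # noqa: F401 -- importing this module patches torch.load
state = torch.load("models/Stable-diffusion/example.ckpt", map_location="cpu")
if state is None:
    # the file failed verification and was refused; --disable-safe-unpickle bypasses the check
    print("checkpoint rejected by the safety check")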
...@@ -9,14 +9,12 @@ from basicsr.utils.download_util import load_file_from_url ...@@ -9,14 +9,12 @@ from basicsr.utils.download_util import load_file_from_url
import modules.upscaler import modules.upscaler
from modules import devices, modelloader from modules import devices, modelloader
from modules.paths import models_path
from modules.scunet_model_arch import SCUNet as net from modules.scunet_model_arch import SCUNet as net
class UpscalerScuNET(modules.upscaler.Upscaler): class UpscalerScuNET(modules.upscaler.Upscaler):
def __init__(self, dirname): def __init__(self, dirname):
self.name = "ScuNET" self.name = "ScuNET"
self.model_path = os.path.join(models_path, self.name)
self.model_name = "ScuNET GAN" self.model_name = "ScuNET GAN"
self.model_name2 = "ScuNET PSNR" self.model_name2 = "ScuNET PSNR"
self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth" self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth"
......
...@@ -40,7 +40,7 @@ class WMSA(nn.Module): ...@@ -40,7 +40,7 @@ class WMSA(nn.Module):
Returns: Returns:
attn_mask: should be (1 1 w p p), attn_mask: should be (1 1 w p p),
""" """
# supporting sqaure. # supporting square.
attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device) attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device)
if self.type == 'W': if self.type == 'W':
return attn_mask return attn_mask
...@@ -65,7 +65,7 @@ class WMSA(nn.Module): ...@@ -65,7 +65,7 @@ class WMSA(nn.Module):
x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size) x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size)
h_windows = x.size(1) h_windows = x.size(1)
w_windows = x.size(2) w_windows = x.size(2)
# sqaure validation # square validation
# assert h_windows == w_windows # assert h_windows == w_windows
x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size) x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size)
......
This diff is collapsed.
import math import math
import sys
import traceback
import torch import torch
from torch import einsum from torch import einsum
from ldm.util import default from ldm.util import default
from einops import rearrange from einops import rearrange
from modules import shared from modules import shared, hypernetwork
if shared.cmd_opts.xformers or shared.cmd_opts.force_enable_xformers:
try:
import xformers.ops
shared.xformers_available = True
except Exception:
print("Cannot import xformers", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
# see https://github.com/basujindal/stable-diffusion/pull/117 for discussion # see https://github.com/basujindal/stable-diffusion/pull/117 for discussion
def split_cross_attention_forward_v1(self, x, context=None, mask=None): def split_cross_attention_forward_v1(self, x, context=None, mask=None):
h = self.heads h = self.heads
q = self.to_q(x) q_in = self.to_q(x)
context = default(context, x) context = default(context, x)
k = self.to_k(context)
v = self.to_v(context)
del context, x
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) context_k, context_v = hypernetwork.apply_hypernetwork(shared.loaded_hypernetwork, context)
k_in = self.to_k(context_k)
v_in = self.to_v(context_v)
del context, context_k, context_v, x
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q_in, k_in, v_in))
del q_in, k_in, v_in
r1 = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device) r1 = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device)
for i in range(0, q.shape[0], 2): for i in range(0, q.shape[0], 2):
...@@ -31,6 +46,7 @@ def split_cross_attention_forward_v1(self, x, context=None, mask=None): ...@@ -31,6 +46,7 @@ def split_cross_attention_forward_v1(self, x, context=None, mask=None):
r1[i:end] = einsum('b i j, b j d -> b i d', s2, v[i:end]) r1[i:end] = einsum('b i j, b j d -> b i d', s2, v[i:end])
del s2 del s2
del q, k, v
r2 = rearrange(r1, '(b h) n d -> b n (h d)', h=h) r2 = rearrange(r1, '(b h) n d -> b n (h d)', h=h)
del r1 del r1
...@@ -38,21 +54,16 @@ def split_cross_attention_forward_v1(self, x, context=None, mask=None): ...@@ -38,21 +54,16 @@ def split_cross_attention_forward_v1(self, x, context=None, mask=None):
return self.to_out(r2) return self.to_out(r2)
# taken from https://github.com/Doggettx/stable-diffusion # taken from https://github.com/Doggettx/stable-diffusion and modified
def split_cross_attention_forward(self, x, context=None, mask=None): def split_cross_attention_forward(self, x, context=None, mask=None):
h = self.heads h = self.heads
q_in = self.to_q(x) q_in = self.to_q(x)
context = default(context, x) context = default(context, x)
hypernetwork_layers = (shared.hypernetwork.layers if shared.hypernetwork is not None else {}).get(context.shape[2], None) context_k, context_v = hypernetwork.apply_hypernetwork(shared.loaded_hypernetwork, context)
k_in = self.to_k(context_k)
if hypernetwork_layers is not None: v_in = self.to_v(context_v)
k_in = self.to_k(hypernetwork_layers[0](context))
v_in = self.to_v(hypernetwork_layers[1](context))
else:
k_in = self.to_k(context)
v_in = self.to_v(context)
k_in *= self.scale k_in *= self.scale
...@@ -104,6 +115,22 @@ def split_cross_attention_forward(self, x, context=None, mask=None): ...@@ -104,6 +115,22 @@ def split_cross_attention_forward(self, x, context=None, mask=None):
return self.to_out(r2) return self.to_out(r2)
def xformers_attention_forward(self, x, context=None, mask=None):
h = self.heads
q_in = self.to_q(x)
context = default(context, x)
context_k, context_v = hypernetwork.apply_hypernetwork(shared.loaded_hypernetwork, context)
k_in = self.to_k(context_k)
v_in = self.to_v(context_v)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b n h d', h=h), (q_in, k_in, v_in))
del q_in, k_in, v_in
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
out = rearrange(out, 'b n h d -> b n (h d)', h=h)
return self.to_out(out)
def cross_attention_attnblock_forward(self, x): def cross_attention_attnblock_forward(self, x):
h_ = x h_ = x
h_ = self.norm(h_) h_ = self.norm(h_)
...@@ -166,3 +193,16 @@ def cross_attention_attnblock_forward(self, x): ...@@ -166,3 +193,16 @@ def cross_attention_attnblock_forward(self, x):
h3 += x h3 += x
return h3 return h3
def xformers_attnblock_forward(self, x):
try:
h_ = x
h_ = self.norm(h_)
q1 = self.q(h_).contiguous()
k1 = self.k(h_).contiguous()
v = self.v(h_).contiguous()
out = xformers.ops.memory_efficient_attention(q1, k1, v)
out = self.proj_out(out)
return x + out
except NotImplementedError:
return cross_attention_attnblock_forward(self, x)
...@@ -5,7 +5,6 @@ from collections import namedtuple ...@@ -5,7 +5,6 @@ from collections import namedtuple
import torch import torch
from omegaconf import OmegaConf from omegaconf import OmegaConf
from ldm.util import instantiate_from_config from ldm.util import instantiate_from_config
from modules import shared, modelloader, devices from modules import shared, modelloader, devices
...@@ -14,7 +13,7 @@ from modules.paths import models_path ...@@ -14,7 +13,7 @@ from modules.paths import models_path
model_dir = "Stable-diffusion" model_dir = "Stable-diffusion"
model_path = os.path.abspath(os.path.join(models_path, model_dir)) model_path = os.path.abspath(os.path.join(models_path, model_dir))
CheckpointInfo = namedtuple("CheckpointInfo", ['filename', 'title', 'hash', 'model_name']) CheckpointInfo = namedtuple("CheckpointInfo", ['filename', 'title', 'hash', 'model_name', 'config'])
checkpoints_list = {} checkpoints_list = {}
try: try:
...@@ -63,14 +62,20 @@ def list_models(): ...@@ -63,14 +62,20 @@ def list_models():
if os.path.exists(cmd_ckpt): if os.path.exists(cmd_ckpt):
h = model_hash(cmd_ckpt) h = model_hash(cmd_ckpt)
title, short_model_name = modeltitle(cmd_ckpt, h) title, short_model_name = modeltitle(cmd_ckpt, h)
checkpoints_list[title] = CheckpointInfo(cmd_ckpt, title, h, short_model_name) checkpoints_list[title] = CheckpointInfo(cmd_ckpt, title, h, short_model_name, shared.cmd_opts.config)
shared.opts.data['sd_model_checkpoint'] = title shared.opts.data['sd_model_checkpoint'] = title
elif cmd_ckpt is not None and cmd_ckpt != shared.default_sd_model_file: elif cmd_ckpt is not None and cmd_ckpt != shared.default_sd_model_file:
print(f"Checkpoint in --ckpt argument not found (Possible it was moved to {model_path}: {cmd_ckpt}", file=sys.stderr) print(f"Checkpoint in --ckpt argument not found (Possible it was moved to {model_path}: {cmd_ckpt}", file=sys.stderr)
for filename in model_list: for filename in model_list:
h = model_hash(filename) h = model_hash(filename)
title, short_model_name = modeltitle(filename, h) title, short_model_name = modeltitle(filename, h)
checkpoints_list[title] = CheckpointInfo(filename, title, h, short_model_name)
basename, _ = os.path.splitext(filename)
config = basename + ".yaml"
if not os.path.exists(config):
config = shared.cmd_opts.config
checkpoints_list[title] = CheckpointInfo(filename, title, h, short_model_name, config)
def get_closet_checkpoint_match(searchString): def get_closet_checkpoint_match(searchString):
...@@ -116,13 +121,24 @@ def select_checkpoint(): ...@@ -116,13 +121,24 @@ def select_checkpoint():
return checkpoint_info return checkpoint_info
def load_model_weights(model, checkpoint_file, sd_model_hash): def get_state_dict_from_checkpoint(pl_sd):
if "state_dict" in pl_sd:
return pl_sd["state_dict"]
return pl_sd
def load_model_weights(model, checkpoint_info):
checkpoint_file = checkpoint_info.filename
sd_model_hash = checkpoint_info.hash
print(f"Loading weights [{sd_model_hash}] from {checkpoint_file}") print(f"Loading weights [{sd_model_hash}] from {checkpoint_file}")
pl_sd = torch.load(checkpoint_file, map_location="cpu") pl_sd = torch.load(checkpoint_file, map_location="cpu")
if "global_step" in pl_sd: if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}") print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
sd = get_state_dict_from_checkpoint(pl_sd)
model.load_state_dict(sd, strict=False) model.load_state_dict(sd, strict=False)
...@@ -133,8 +149,13 @@ def load_model_weights(model, checkpoint_file, sd_model_hash): ...@@ -133,8 +149,13 @@ def load_model_weights(model, checkpoint_file, sd_model_hash):
model.half() model.half()
devices.dtype = torch.float32 if shared.cmd_opts.no_half else torch.float16 devices.dtype = torch.float32 if shared.cmd_opts.no_half else torch.float16
devices.dtype_vae = torch.float32 if shared.cmd_opts.no_half or shared.cmd_opts.no_half_vae else torch.float16
vae_file = os.path.splitext(checkpoint_file)[0] + ".vae.pt" vae_file = os.path.splitext(checkpoint_file)[0] + ".vae.pt"
if not os.path.exists(vae_file) and shared.cmd_opts.vae_path is not None:
vae_file = shared.cmd_opts.vae_path
if os.path.exists(vae_file): if os.path.exists(vae_file):
print(f"Loading VAE weights from: {vae_file}") print(f"Loading VAE weights from: {vae_file}")
vae_ckpt = torch.load(vae_file, map_location="cpu") vae_ckpt = torch.load(vae_file, map_location="cpu")
...@@ -142,17 +163,23 @@ def load_model_weights(model, checkpoint_file, sd_model_hash): ...@@ -142,17 +163,23 @@ def load_model_weights(model, checkpoint_file, sd_model_hash):
model.first_stage_model.load_state_dict(vae_dict) model.first_stage_model.load_state_dict(vae_dict)
model.first_stage_model.to(devices.dtype_vae)
model.sd_model_hash = sd_model_hash model.sd_model_hash = sd_model_hash
model.sd_model_checkpint = checkpoint_file model.sd_model_checkpoint = checkpoint_file
model.sd_checkpoint_info = checkpoint_info
def load_model(): def load_model():
from modules import lowvram, sd_hijack from modules import lowvram, sd_hijack
checkpoint_info = select_checkpoint() checkpoint_info = select_checkpoint()
sd_config = OmegaConf.load(shared.cmd_opts.config) if checkpoint_info.config != shared.cmd_opts.config:
print(f"Loading config from: {checkpoint_info.config}")
sd_config = OmegaConf.load(checkpoint_info.config)
sd_model = instantiate_from_config(sd_config.model) sd_model = instantiate_from_config(sd_config.model)
load_model_weights(sd_model, checkpoint_info.filename, checkpoint_info.hash) load_model_weights(sd_model, checkpoint_info)
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.setup_for_low_vram(sd_model, shared.cmd_opts.medvram) lowvram.setup_for_low_vram(sd_model, shared.cmd_opts.medvram)
...@@ -171,9 +198,13 @@ def reload_model_weights(sd_model, info=None): ...@@ -171,9 +198,13 @@ def reload_model_weights(sd_model, info=None):
from modules import lowvram, devices, sd_hijack from modules import lowvram, devices, sd_hijack
checkpoint_info = info or select_checkpoint() checkpoint_info = info or select_checkpoint()
if sd_model.sd_model_checkpint == checkpoint_info.filename: if sd_model.sd_model_checkpoint == checkpoint_info.filename:
return return
if sd_model.sd_checkpoint_info.config != checkpoint_info.config:
shared.sd_model = load_model()
return shared.sd_model
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.send_everything_to_cpu() lowvram.send_everything_to_cpu()
else: else:
...@@ -181,7 +212,7 @@ def reload_model_weights(sd_model, info=None): ...@@ -181,7 +212,7 @@ def reload_model_weights(sd_model, info=None):
sd_hijack.model_hijack.undo_hijack(sd_model) sd_hijack.model_hijack.undo_hijack(sd_model)
load_model_weights(sd_model, checkpoint_info.filename, checkpoint_info.hash) load_model_weights(sd_model, checkpoint_info)
sd_hijack.model_hijack.hijack(sd_model) sd_hijack.model_hijack.hijack(sd_model)
......
...@@ -7,7 +7,7 @@ import inspect ...@@ -7,7 +7,7 @@ import inspect
import k_diffusion.sampling import k_diffusion.sampling
import ldm.models.diffusion.ddim import ldm.models.diffusion.ddim
import ldm.models.diffusion.plms import ldm.models.diffusion.plms
from modules import prompt_parser from modules import prompt_parser, devices, processing
from modules.shared import opts, cmd_opts, state from modules.shared import opts, cmd_opts, state
import modules.shared as shared import modules.shared as shared
...@@ -83,7 +83,7 @@ def setup_img2img_steps(p, steps=None): ...@@ -83,7 +83,7 @@ def setup_img2img_steps(p, steps=None):
def sample_to_image(samples): def sample_to_image(samples):
x_sample = shared.sd_model.decode_first_stage(samples[0:1].type(shared.sd_model.dtype))[0] x_sample = processing.decode_first_stage(shared.sd_model, samples[0:1])[0]
x_sample = torch.clamp((x_sample + 1.0) / 2.0, min=0.0, max=1.0) x_sample = torch.clamp((x_sample + 1.0) / 2.0, min=0.0, max=1.0)
x_sample = 255. * np.moveaxis(x_sample.cpu().numpy(), 0, 2) x_sample = 255. * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
x_sample = x_sample.astype(np.uint8) x_sample = x_sample.astype(np.uint8)
...@@ -106,7 +106,7 @@ def extended_tdqm(sequence, *args, desc=None, **kwargs): ...@@ -106,7 +106,7 @@ def extended_tdqm(sequence, *args, desc=None, **kwargs):
seq = sequence if cmd_opts.disable_console_progressbars else tqdm.tqdm(sequence, *args, desc=state.job, file=shared.progress_print_out, **kwargs) seq = sequence if cmd_opts.disable_console_progressbars else tqdm.tqdm(sequence, *args, desc=state.job, file=shared.progress_print_out, **kwargs)
for x in seq: for x in seq:
if state.interrupted: if state.interrupted or state.skipped:
break break
yield x yield x
...@@ -142,6 +142,16 @@ class VanillaStableDiffusionSampler: ...@@ -142,6 +142,16 @@ class VanillaStableDiffusionSampler:
assert all([len(conds) == 1 for conds in conds_list]), 'composition via AND is not supported for DDIM/PLMS samplers' assert all([len(conds) == 1 for conds in conds_list]), 'composition via AND is not supported for DDIM/PLMS samplers'
cond = tensor cond = tensor
# for DDIM, shapes must match, we can't just process cond and uncond independently;
# filling unconditional_conditioning with repeats of the last vector to match length is
# not 100% correct but should work well enough
if unconditional_conditioning.shape[1] < cond.shape[1]:
last_vector = unconditional_conditioning[:, -1:]
last_vector_repeated = last_vector.repeat([1, cond.shape[1] - unconditional_conditioning.shape[1], 1])
unconditional_conditioning = torch.hstack([unconditional_conditioning, last_vector_repeated])
elif unconditional_conditioning.shape[1] > cond.shape[1]:
unconditional_conditioning = unconditional_conditioning[:, :cond.shape[1]]
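A sketch of the shape matching above with example token counts: the shorter unconditional tensor is padded along the token dimension by repeating its last vector, or truncated if it is the longer one (shapes are illustrative, not taken from the diff):
import torch
cond = torch.randn(1, 154, 768)
uncond = torch.randn(1, 77, 768)
if uncond.shape[1] < cond.shape[1]:
    pad = uncond[:, -1:].repeat(1, cond.shape[1] - uncond.shape[1], 1)
    uncond = torch.hstack([uncond, pad])     # hstack concatenates along the token axis
elif uncond.shape[1] > cond.shape[1]:
    uncond = uncond[:, :cond.shape[1]]
assert uncond.shape == cond.shape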
if self.mask is not None: if self.mask is not None:
img_orig = self.sampler.model.q_sample(self.init_latent, ts) img_orig = self.sampler.model.q_sample(self.init_latent, ts)
x_dec = img_orig * self.mask + self.nmask * x_dec x_dec = img_orig * self.mask + self.nmask * x_dec
...@@ -171,7 +181,7 @@ class VanillaStableDiffusionSampler: ...@@ -171,7 +181,7 @@ class VanillaStableDiffusionSampler:
self.initialize(p) self.initialize(p)
# existing code fails with cetain step counts, like 9 # existing code fails with certain step counts, like 9
try: try:
self.sampler.make_schedule(ddim_num_steps=steps, ddim_eta=self.eta, ddim_discretize=p.ddim_discretize, verbose=False) self.sampler.make_schedule(ddim_num_steps=steps, ddim_eta=self.eta, ddim_discretize=p.ddim_discretize, verbose=False)
except Exception: except Exception:
...@@ -194,7 +204,7 @@ class VanillaStableDiffusionSampler: ...@@ -194,7 +204,7 @@ class VanillaStableDiffusionSampler:
steps = steps or p.steps steps = steps or p.steps
# existing code fails with cetin step counts, like 9 # existing code fails with certain step counts, like 9
try: try:
samples_ddim, _ = self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta) samples_ddim, _ = self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)
except Exception: except Exception:
...@@ -221,18 +231,29 @@ class CFGDenoiser(torch.nn.Module): ...@@ -221,18 +231,29 @@ class CFGDenoiser(torch.nn.Module):
x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x]) x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x])
sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma]) sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma])
cond_in = torch.cat([tensor, uncond])
if shared.batch_cond_uncond: if tensor.shape[1] == uncond.shape[1]:
x_out = self.inner_model(x_in, sigma_in, cond=cond_in) cond_in = torch.cat([tensor, uncond])
if shared.batch_cond_uncond:
x_out = self.inner_model(x_in, sigma_in, cond=cond_in)
else:
x_out = torch.zeros_like(x_in)
for batch_offset in range(0, x_out.shape[0], batch_size):
a = batch_offset
b = a + batch_size
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=cond_in[a:b])
else: else:
x_out = torch.zeros_like(x_in) x_out = torch.zeros_like(x_in)
for batch_offset in range(0, x_out.shape[0], batch_size): batch_size = batch_size*2 if shared.batch_cond_uncond else batch_size
for batch_offset in range(0, tensor.shape[0], batch_size):
a = batch_offset a = batch_offset
b = a + batch_size b = min(a + batch_size, tensor.shape[0])
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=cond_in[a:b]) x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=tensor[a:b])
x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=uncond)
denoised_uncond = x_out[-batch_size:] denoised_uncond = x_out[-uncond.shape[0]:]
denoised = torch.clone(denoised_uncond) denoised = torch.clone(denoised_uncond)
for i, conds in enumerate(conds_list): for i, conds in enumerate(conds_list):
...@@ -254,7 +275,7 @@ def extended_trange(sampler, count, *args, **kwargs): ...@@ -254,7 +275,7 @@ def extended_trange(sampler, count, *args, **kwargs):
seq = range(count) if cmd_opts.disable_console_progressbars else tqdm.trange(count, *args, desc=state.job, file=shared.progress_print_out, **kwargs) seq = range(count) if cmd_opts.disable_console_progressbars else tqdm.trange(count, *args, desc=state.job, file=shared.progress_print_out, **kwargs)
for x in seq: for x in seq:
if state.interrupted: if state.interrupted or state.skipped:
break break
if sampler.stop_at is not None and x > sampler.stop_at: if sampler.stop_at is not None and x > sampler.stop_at:
......
This diff is collapsed.
...@@ -8,9 +8,9 @@ from basicsr.utils.download_util import load_file_from_url ...@@ -8,9 +8,9 @@ from basicsr.utils.download_util import load_file_from_url
from tqdm import tqdm from tqdm import tqdm
from modules import modelloader from modules import modelloader
from modules.paths import models_path
from modules.shared import cmd_opts, opts, device from modules.shared import cmd_opts, opts, device
from modules.swinir_model_arch import SwinIR as net from modules.swinir_model_arch import SwinIR as net
from modules.swinir_model_arch_v2 import Swin2SR as net2
from modules.upscaler import Upscaler, UpscalerData from modules.upscaler import Upscaler, UpscalerData
precision_scope = ( precision_scope = (
...@@ -25,7 +25,6 @@ class UpscalerSwinIR(Upscaler): ...@@ -25,7 +25,6 @@ class UpscalerSwinIR(Upscaler):
"/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \ "/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \
"-L_x4_GAN.pth " "-L_x4_GAN.pth "
self.model_name = "SwinIR 4x" self.model_name = "SwinIR 4x"
self.model_path = os.path.join(models_path, self.name)
self.user_path = dirname self.user_path = dirname
super().__init__() super().__init__()
scalers = [] scalers = []
...@@ -59,22 +58,42 @@ class UpscalerSwinIR(Upscaler): ...@@ -59,22 +58,42 @@ class UpscalerSwinIR(Upscaler):
filename = path filename = path
if filename is None or not os.path.exists(filename): if filename is None or not os.path.exists(filename):
return None return None
model = net( if filename.endswith(".v2.pth"):
model = net2(
upscale=scale, upscale=scale,
in_chans=3, in_chans=3,
img_size=64, img_size=64,
window_size=8, window_size=8,
img_range=1.0, img_range=1.0,
depths=[6, 6, 6, 6, 6, 6, 6, 6, 6], depths=[6, 6, 6, 6, 6, 6],
embed_dim=240, embed_dim=180,
num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8], num_heads=[6, 6, 6, 6, 6, 6],
mlp_ratio=2, mlp_ratio=2,
upsampler="nearest+conv", upsampler="nearest+conv",
resi_connection="3conv", resi_connection="1conv",
) )
params = None
else:
model = net(
upscale=scale,
in_chans=3,
img_size=64,
window_size=8,
img_range=1.0,
depths=[6, 6, 6, 6, 6, 6, 6, 6, 6],
embed_dim=240,
num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8],
mlp_ratio=2,
upsampler="nearest+conv",
resi_connection="3conv",
)
params = "params_ema"
pretrained_model = torch.load(filename) pretrained_model = torch.load(filename)
model.load_state_dict(pretrained_model["params_ema"], strict=True) if params is not None:
model.load_state_dict(pretrained_model[params], strict=True)
else:
model.load_state_dict(pretrained_model, strict=True)
if not cmd_opts.no_half: if not cmd_opts.no_half:
model = model.half() model = model.half()
return model return model
......
...@@ -166,7 +166,7 @@ class SwinTransformerBlock(nn.Module): ...@@ -166,7 +166,7 @@ class SwinTransformerBlock(nn.Module):
Args: Args:
dim (int): Number of input channels. dim (int): Number of input channels.
input_resolution (tuple[int]): Input resulotion. input_resolution (tuple[int]): Input resolution.
num_heads (int): Number of attention heads. num_heads (int): Number of attention heads.
window_size (int): Window size. window_size (int): Window size.
shift_size (int): Shift size for SW-MSA. shift_size (int): Shift size for SW-MSA.
......
This diff is collapsed.
...@@ -15,11 +15,10 @@ re_tag = re.compile(r"[a-zA-Z][_\w\d()]+") ...@@ -15,11 +15,10 @@ re_tag = re.compile(r"[a-zA-Z][_\w\d()]+")
class PersonalizedBase(Dataset): class PersonalizedBase(Dataset):
def __init__(self, data_root, size=None, repeats=100, flip_p=0.5, placeholder_token="*", width=512, height=512, model=None, device=None, template_file=None): def __init__(self, data_root, width, height, repeats, flip_p=0.5, placeholder_token="*", model=None, device=None, template_file=None):
self.placeholder_token = placeholder_token self.placeholder_token = placeholder_token
self.size = size
self.width = width self.width = width
self.height = height self.height = height
self.flip = transforms.RandomHorizontalFlip(p=flip_p) self.flip = transforms.RandomHorizontalFlip(p=flip_p)
......
...@@ -7,8 +7,9 @@ import tqdm ...@@ -7,8 +7,9 @@ import tqdm
from modules import shared, images from modules import shared, images
def preprocess(process_src, process_dst, process_flip, process_split, process_caption): def preprocess(process_src, process_dst, process_width, process_height, process_flip, process_split, process_caption):
size = 512 width = process_width
height = process_height
src = os.path.abspath(process_src) src = os.path.abspath(process_src)
dst = os.path.abspath(process_dst) dst = os.path.abspath(process_dst)
...@@ -55,23 +56,23 @@ def preprocess(process_src, process_dst, process_flip, process_split, process_ca ...@@ -55,23 +56,23 @@ def preprocess(process_src, process_dst, process_flip, process_split, process_ca
is_wide = ratio < 1 / 1.35 is_wide = ratio < 1 / 1.35
if process_split and is_tall: if process_split and is_tall:
img = img.resize((size, size * img.height // img.width)) img = img.resize((width, height * img.height // img.width))
top = img.crop((0, 0, size, size)) top = img.crop((0, 0, width, height))
save_pic(top, index) save_pic(top, index)
bot = img.crop((0, img.height - size, size, img.height)) bot = img.crop((0, img.height - height, width, img.height))
save_pic(bot, index) save_pic(bot, index)
elif process_split and is_wide: elif process_split and is_wide:
img = img.resize((size * img.width // img.height, size)) img = img.resize((width * img.width // img.height, height))
left = img.crop((0, 0, size, size)) left = img.crop((0, 0, width, height))
save_pic(left, index) save_pic(left, index)
right = img.crop((img.width - size, 0, img.width, size)) right = img.crop((img.width - width, 0, img.width, height))
save_pic(right, index) save_pic(right, index)
else: else:
img = images.resize_image(1, img, size, size) img = images.resize_image(1, img, width, height)
save_pic(img, index) save_pic(img, index)
shared.state.nextjob() shared.state.nextjob()
......
...@@ -156,7 +156,7 @@ def create_embedding(name, num_vectors_per_token, init_text='*'): ...@@ -156,7 +156,7 @@ def create_embedding(name, num_vectors_per_token, init_text='*'):
return fn return fn
def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps, create_image_every, save_embedding_every, template_file): def train_embedding(embedding_name, learn_rate, data_root, log_directory, training_width, training_height, steps, num_repeats, create_image_every, save_embedding_every, template_file):
assert embedding_name, 'embedding not selected' assert embedding_name, 'embedding not selected'
shared.state.textinfo = "Initializing textual inversion training..." shared.state.textinfo = "Initializing textual inversion training..."
...@@ -182,7 +182,7 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps, ...@@ -182,7 +182,7 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps,
shared.state.textinfo = f"Preparing dataset from {html.escape(data_root)}..." shared.state.textinfo = f"Preparing dataset from {html.escape(data_root)}..."
with torch.autocast("cuda"): with torch.autocast("cuda"):
ds = modules.textual_inversion.dataset.PersonalizedBase(data_root=data_root, size=512, placeholder_token=embedding_name, model=shared.sd_model, device=devices.device, template_file=template_file) ds = modules.textual_inversion.dataset.PersonalizedBase(data_root=data_root, width=training_width, height=training_height, repeats=num_repeats, placeholder_token=embedding_name, model=shared.sd_model, device=devices.device, template_file=template_file)
hijack = sd_hijack.model_hijack hijack = sd_hijack.model_hijack
...@@ -200,6 +200,9 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps, ...@@ -200,6 +200,9 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps,
if ititial_step > steps: if ititial_step > steps:
return embedding, filename return embedding, filename
tr_img_len = len([os.path.join(data_root, file_path) for file_path in os.listdir(data_root)])
epoch_len = (tr_img_len * num_repeats) + tr_img_len
pbar = tqdm.tqdm(enumerate(ds), total=steps-ititial_step) pbar = tqdm.tqdm(enumerate(ds), total=steps-ititial_step)
for i, (x, text) in pbar: for i, (x, text) in pbar:
embedding.step = i + ititial_step embedding.step = i + ititial_step
...@@ -223,7 +226,10 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps, ...@@ -223,7 +226,10 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps,
loss.backward() loss.backward()
optimizer.step() optimizer.step()
pbar.set_description(f"loss: {losses.mean():.7f}") epoch_num = embedding.step // epoch_len
epoch_step = embedding.step - (epoch_num * epoch_len) + 1
pbar.set_description(f"[Epoch {epoch_num}: {epoch_step}/{epoch_len}]loss: {losses.mean():.7f}")
if embedding.step > 0 and embedding_dir is not None and embedding.step % save_embedding_every == 0: if embedding.step > 0 and embedding_dir is not None and embedding.step % save_embedding_every == 0:
last_saved_file = os.path.join(embedding_dir, f'{embedding_name}-{embedding.step}.pt') last_saved_file = os.path.join(embedding_dir, f'{embedding_name}-{embedding.step}.pt')
...@@ -236,6 +242,8 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps, ...@@ -236,6 +242,8 @@ def train_embedding(embedding_name, learn_rate, data_root, log_directory, steps,
sd_model=shared.sd_model, sd_model=shared.sd_model,
prompt=text, prompt=text,
steps=20, steps=20,
height=training_height,
width=training_width,
do_not_save_grid=True, do_not_save_grid=True,
do_not_save_samples=True, do_not_save_samples=True,
) )
......
This diff is collapsed.
...@@ -36,10 +36,11 @@ class Upscaler: ...@@ -36,10 +36,11 @@ class Upscaler:
self.half = not modules.shared.cmd_opts.no_half self.half = not modules.shared.cmd_opts.no_half
self.pre_pad = 0 self.pre_pad = 0
self.mod_scale = None self.mod_scale = None
if self.name is not None and create_dirs:
if self.model_path is None and self.name:
self.model_path = os.path.join(models_path, self.name) self.model_path = os.path.join(models_path, self.name)
if not os.path.exists(self.model_path): if self.model_path and create_dirs:
os.makedirs(self.model_path) os.makedirs(self.model_path, exist_ok=True)
try: try:
import cv2 import cv2
......
...@@ -6,6 +6,10 @@ function get_uiCurrentTab() { ...@@ -6,6 +6,10 @@ function get_uiCurrentTab() {
return gradioApp().querySelector('.tabs button:not(.border-transparent)') return gradioApp().querySelector('.tabs button:not(.border-transparent)')
} }
function get_uiCurrentTabContent() {
return gradioApp().querySelector('.tabitem[id^=tab_]:not([style*="display: none"])')
}
uiUpdateCallbacks = [] uiUpdateCallbacks = []
uiTabChangeCallbacks = [] uiTabChangeCallbacks = []
let uiCurrentTab = null let uiCurrentTab = null
...@@ -40,6 +44,25 @@ document.addEventListener("DOMContentLoaded", function() { ...@@ -40,6 +44,25 @@ document.addEventListener("DOMContentLoaded", function() {
mutationObserver.observe( gradioApp(), { childList:true, subtree:true }) mutationObserver.observe( gradioApp(), { childList:true, subtree:true })
}); });
/**
* Add a ctrl+enter as a shortcut to start a generation
*/
document.addEventListener('keydown', function(e) {
var handled = false;
if (e.key !== undefined) {
if((e.key == "Enter" && (e.metaKey || e.ctrlKey))) handled = true;
} else if (e.keyCode !== undefined) {
if((e.keyCode == 13 && (e.metaKey || e.ctrlKey))) handled = true;
}
if (handled) {
button = get_uiCurrentTabContent().querySelector('button[id$=_generate]');
if (button) {
button.click();
}
e.preventDefault();
}
})
/** /**
* checks that a UI element is not in another hidden element or tab content * checks that a UI element is not in another hidden element or tab content
*/ */
......
...@@ -10,7 +10,6 @@ from modules.processing import Processed, process_images ...@@ -10,7 +10,6 @@ from modules.processing import Processed, process_images
from PIL import Image from PIL import Image
from modules.shared import opts, cmd_opts, state from modules.shared import opts, cmd_opts, state
class Script(scripts.Script): class Script(scripts.Script):
def title(self): def title(self):
return "Prompts from file or textbox" return "Prompts from file or textbox"
...@@ -29,6 +28,9 @@ class Script(scripts.Script): ...@@ -29,6 +28,9 @@ class Script(scripts.Script):
checkbox_txt.change(fn=lambda x: [gr.File.update(visible = not x), gr.TextArea.update(visible = x)], inputs=[checkbox_txt], outputs=[file, prompt_txt]) checkbox_txt.change(fn=lambda x: [gr.File.update(visible = not x), gr.TextArea.update(visible = x)], inputs=[checkbox_txt], outputs=[file, prompt_txt])
return [checkbox_txt, file, prompt_txt] return [checkbox_txt, file, prompt_txt]
def on_show(self, checkbox_txt, file, prompt_txt):
return [ gr.Checkbox.update(visible = True), gr.File.update(visible = not checkbox_txt), gr.TextArea.update(visible = checkbox_txt) ]
def run(self, p, checkbox_txt, data: bytes, prompt_txt: str): def run(self, p, checkbox_txt, data: bytes, prompt_txt: str):
if (checkbox_txt): if (checkbox_txt):
lines = [x.strip() for x in prompt_txt.splitlines()] lines = [x.strip() for x in prompt_txt.splitlines()]
......
...@@ -10,8 +10,8 @@ import numpy as np ...@@ -10,8 +10,8 @@ import numpy as np
import modules.scripts as scripts import modules.scripts as scripts
import gradio as gr import gradio as gr
from modules import images from modules import images, hypernetwork
from modules.processing import process_images, Processed from modules.processing import process_images, Processed, get_correct_sampler
from modules.shared import opts, cmd_opts, state from modules.shared import opts, cmd_opts, state
import modules.shared as shared import modules.shared as shared
import modules.sd_samplers import modules.sd_samplers
...@@ -56,15 +56,17 @@ def apply_order(p, x, xs): ...@@ -56,15 +56,17 @@ def apply_order(p, x, xs):
p.prompt = prompt_tmp + p.prompt p.prompt = prompt_tmp + p.prompt
samplers_dict = {} def build_samplers_dict(p):
for i, sampler in enumerate(modules.sd_samplers.samplers): samplers_dict = {}
samplers_dict[sampler.name.lower()] = i for i, sampler in enumerate(get_correct_sampler(p)):
for alias in sampler.aliases: samplers_dict[sampler.name.lower()] = i
samplers_dict[alias.lower()] = i for alias in sampler.aliases:
samplers_dict[alias.lower()] = i
return samplers_dict
def apply_sampler(p, x, xs): def apply_sampler(p, x, xs):
sampler_index = samplers_dict.get(x.lower(), None) sampler_index = build_samplers_dict(p).get(x.lower(), None)
if sampler_index is None: if sampler_index is None:
raise RuntimeError(f"Unknown sampler: {x}") raise RuntimeError(f"Unknown sampler: {x}")
...@@ -78,7 +80,11 @@ def apply_checkpoint(p, x, xs): ...@@ -78,7 +80,11 @@ def apply_checkpoint(p, x, xs):
def apply_hypernetwork(p, x, xs): def apply_hypernetwork(p, x, xs):
shared.hypernetwork = shared.hypernetworks.get(x, None) hypernetwork.load_hypernetwork(x)
def apply_clip_skip(p, x, xs):
opts.data["CLIP_stop_at_last_layers"] = x
def format_value_add_label(p, opt, x): def format_value_add_label(p, opt, x):
...@@ -132,6 +138,7 @@ axis_options = [ ...@@ -132,6 +138,7 @@ axis_options = [
AxisOption("Sigma max", float, apply_field("s_tmax"), format_value_add_label), AxisOption("Sigma max", float, apply_field("s_tmax"), format_value_add_label),
AxisOption("Sigma noise", float, apply_field("s_noise"), format_value_add_label), AxisOption("Sigma noise", float, apply_field("s_noise"), format_value_add_label),
AxisOption("Eta", float, apply_field("eta"), format_value_add_label), AxisOption("Eta", float, apply_field("eta"), format_value_add_label),
AxisOption("Clip skip", int, apply_clip_skip, format_value_add_label),
AxisOptionImg2Img("Denoising", float, apply_field("denoising_strength"), format_value_add_label), # as it is now all AxisOptionImg2Img items must go after AxisOption ones AxisOptionImg2Img("Denoising", float, apply_field("denoising_strength"), format_value_add_label), # as it is now all AxisOptionImg2Img items must go after AxisOption ones
] ]
...@@ -142,7 +149,7 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend): ...@@ -142,7 +149,7 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend):
ver_texts = [[images.GridAnnotation(y)] for y in y_labels] ver_texts = [[images.GridAnnotation(y)] for y in y_labels]
hor_texts = [[images.GridAnnotation(x)] for x in x_labels] hor_texts = [[images.GridAnnotation(x)] for x in x_labels]
first_pocessed = None first_processed = None
state.job_count = len(xs) * len(ys) * p.n_iter state.job_count = len(xs) * len(ys) * p.n_iter
...@@ -151,8 +158,8 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend): ...@@ -151,8 +158,8 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend):
state.job = f"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}" state.job = f"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}"
processed = cell(x, y) processed = cell(x, y)
if first_pocessed is None: if first_processed is None:
first_pocessed = processed first_processed = processed
try: try:
res.append(processed.images[0]) res.append(processed.images[0])
...@@ -163,9 +170,9 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend): ...@@ -163,9 +170,9 @@ def draw_xy_grid(p, xs, ys, x_labels, y_labels, cell, draw_legend):
if draw_legend: if draw_legend:
grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts) grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts)
first_pocessed.images = [grid] first_processed.images = [grid]
return first_pocessed return first_processed
re_range = re.compile(r"\s*([+-]?\s*\d+)\s*-\s*([+-]?\s*\d+)(?:\s*\(([+-]\d+)\s*\))?\s*") re_range = re.compile(r"\s*([+-]?\s*\d+)\s*-\s*([+-]?\s*\d+)(?:\s*\(([+-]\d+)\s*\))?\s*")
...@@ -195,10 +202,14 @@ class Script(scripts.Script): ...@@ -195,10 +202,14 @@ class Script(scripts.Script):
return [x_type, x_values, y_type, y_values, draw_legend, no_fixed_seeds] return [x_type, x_values, y_type, y_values, draw_legend, no_fixed_seeds]
def run(self, p, x_type, x_values, y_type, y_values, draw_legend, no_fixed_seeds): def run(self, p, x_type, x_values, y_type, y_values, draw_legend, no_fixed_seeds):
modules.processing.fix_seed(p) if not no_fixed_seeds:
p.batch_size = 1 modules.processing.fix_seed(p)
if not opts.return_grid:
p.batch_size = 1
initial_hn = shared.hypernetwork
CLIP_stop_at_last_layers = opts.CLIP_stop_at_last_layers
def process_axis(opt, vals): def process_axis(opt, vals):
if opt.label == 'Nothing': if opt.label == 'Nothing':
...@@ -213,7 +224,6 @@ class Script(scripts.Script): ...@@ -213,7 +224,6 @@ class Script(scripts.Script):
m = re_range.fullmatch(val) m = re_range.fullmatch(val)
mc = re_range_count.fullmatch(val) mc = re_range_count.fullmatch(val)
if m is not None: if m is not None:
start = int(m.group(1)) start = int(m.group(1))
end = int(m.group(2))+1 end = int(m.group(2))+1
step = int(m.group(3)) if m.group(3) is not None else 1 step = int(m.group(3)) if m.group(3) is not None else 1
...@@ -255,6 +265,17 @@ class Script(scripts.Script): ...@@ -255,6 +265,17 @@ class Script(scripts.Script):
valslist = list(permutations(valslist)) valslist = list(permutations(valslist))
valslist = [opt.type(x) for x in valslist] valslist = [opt.type(x) for x in valslist]
+            # Confirm options are valid before starting
+            if opt.label == "Sampler":
+                samplers_dict = build_samplers_dict(p)
+                for sampler_val in valslist:
+                    if sampler_val.lower() not in samplers_dict.keys():
+                        raise RuntimeError(f"Unknown sampler: {sampler_val}")
+            elif opt.label == "Checkpoint name":
+                for ckpt_val in valslist:
+                    if modules.sd_models.get_closet_checkpoint_match(ckpt_val) is None:
+                        raise RuntimeError(f"Checkpoint for {ckpt_val} not found")

            return valslist
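A generic sketch of the same fail-fast idea added above (not the script's code): validate every axis value against a known set before any expensive work starts, so a long X/Y grid cannot die halfway through on a typo. The helper name and the sampler names in the usage lines are made up.

```python
def validate_axis_values(label, values, known):
    """Hypothetical helper: raise early if any value is not in `known` (case-insensitive)."""
    unknown = [v for v in values if str(v).lower() not in known]
    if unknown:
        raise RuntimeError(f"Unknown {label} value(s): {', '.join(map(str, unknown))}")

# Usage with made-up sampler names:
validate_axis_values("sampler", ["Euler a", "DDIM"], {"euler a", "ddim", "plms"})   # passes
# validate_axis_values("sampler", ["Eular"], {"euler a", "ddim", "plms"})           # raises RuntimeError
```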
@@ -307,6 +328,8 @@ class Script(scripts.Script):
        # restore checkpoint in case it was changed by axes
        modules.sd_models.reload_model_weights(shared.sd_model)

-        shared.hypernetwork = initial_hn
+        hypernetwork.load_hypernetwork(opts.sd_hypernetwork)
+
+        opts.data["CLIP_stop_at_last_layers"] = CLIP_stop_at_last_layers

        return processed
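Illustration only (not the commit's code): the general save-and-restore pattern the script relies on, written with a hypothetical `settings` dict and a try/finally so the original values come back even if a cell raises. The script itself restores explicitly at the end of `run` instead of using try/finally.

```python
settings = {"CLIP_stop_at_last_layers": 1, "sd_hypernetwork": "None"}  # hypothetical globals

def run_grid_with_restored_settings(cells):
    saved = dict(settings)                 # snapshot everything the axes may overwrite
    try:
        return [cell() for cell in cells]  # each cell may change entries in `settings`
    finally:
        settings.update(saved)             # always restore, even on error

print(run_grid_with_restored_settings(
    [lambda: settings.update(CLIP_stop_at_last_layers=2) or "img"]))  # ['img']
print(settings["CLIP_stop_at_last_layers"])                           # back to 1
```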
.container {
    max-width: 100%;
}

.output-html p {margin: 0 0.5em;}

.row > *,
@@ -103,7 +107,12 @@
#style_apply, #style_create, #interrogate{
    margin: 0.75em 0.25em 0.25em 0.25em;
-    min-width: 3em;
+    min-width: 5em;
+}
+
+#style_apply, #style_create, #deepbooru{
+    margin: 0.75em 0.25em 0.25em 0.25em;
+    min-width: 5em;
}

#style_pos_col, #style_neg_col{
@@ -393,10 +402,20 @@ input[type="range"]{
#txt2img_interrupt, #img2img_interrupt{
    position: absolute;
-    width: 100%;
+    width: 50%;
    height: 72px;
    background: #b4c0cc;
-    border-radius: 8px;
+    border-radius: 0px;
+    display: none;
+}
+
+#txt2img_skip, #img2img_skip{
+    position: absolute;
+    width: 50%;
+    right: 0px;
+    height: 72px;
+    background: #b4c0cc;
+    border-radius: 0px;
    display: none;
}
@@ -410,4 +429,48 @@ input[type="range"]{
#img2img_image div.h-60{
    height: 480px;
}
\ No newline at end of file
+
+#context-menu{
+    z-index:9999;
+    position:absolute;
+    display:block;
+    padding:0px 0;
+    border:2px solid #a55000;
+    border-radius:8px;
+    box-shadow:1px 1px 2px #CE6400;
+    width: 200px;
+}
+
+.context-menu-items{
+    list-style: none;
+    margin: 0;
+    padding: 0;
+}
+
+.context-menu-items a{
+    display:block;
+    padding:5px;
+    cursor:pointer;
+}
+
+.context-menu-items a:hover{
+    background: #a55000;
+}
+
+#quicksettings > div{
+    border: none;
+    background: none;
+}
+
+#quicksettings > div > div{
+    max-width: 32em;
+    padding: 0;
+}
+
+canvas[key="mask"] {
+    z-index: 12 !important;
+    filter: invert();
+    mix-blend-mode: multiply;
+    pointer-events: none;
+}
[Binary image changed: txt2img_Screenshot.png replaced (526 KB → 329 KB); image-comparison viewer controls omitted]
@@ -5,6 +5,8 @@ import importlib
import signal
import threading

+from fastapi.middleware.gzip import GZipMiddleware
+
from modules.paths import script_path
from modules import devices, sd_samplers

@@ -58,6 +60,7 @@ def wrap_gradio_gpu_call(func, extra_outputs=None):
        shared.state.current_latent = None
        shared.state.current_image = None
        shared.state.current_image_sampling_step = 0
+        shared.state.skipped = False
        shared.state.interrupted = False
        shared.state.textinfo = None
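A toy illustration of the flag added above (hypothetical names, not the webui's State class): a "skip" flag aborts only the current job and is cleared before the next one, while "interrupt" stops the whole batch.

```python
class JobState:
    def __init__(self):
        self.skipped = False
        self.interrupted = False

def run_batch(state, jobs):
    results = []
    for job in jobs:
        if state.interrupted:
            break                      # interrupt ends the entire batch
        state.skipped = False          # cleared per job; the UI may set it mid-job
        result = job(state)            # a job checks state.skipped between steps
        if not state.skipped:
            results.append(result)
    return results

state = JobState()
jobs = [lambda s: "image-1",
        lambda s: (setattr(s, "skipped", True), "half-done")[1]]  # second job gets skipped
print(run_batch(state, jobs))          # ['image-1']
```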
@@ -88,6 +91,9 @@ modules.scripts.load_scripts(os.path.join(script_path, "scripts"))
shared.sd_model = modules.sd_models.load_model()
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights(shared.sd_model)))

+loaded_hypernetwork = modules.hypernetwork.load_hypernetwork(shared.opts.sd_hypernetwork)
+shared.opts.onchange("sd_hypernetwork", wrap_queued_call(lambda: modules.hypernetwork.load_hypernetwork(shared.opts.sd_hypernetwork)))

def webui():
    # make the program just exit at ctrl+c without waiting for anything
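For context on the `onchange` handlers registered above, here is a minimal sketch of a settings-changed callback pattern (a hypothetical `Options` class, not the repository's `shared.opts` implementation): setting a key later re-runs the registered handler, which is how a hypernetwork swap can be applied on the fly.

```python
class Options:
    def __init__(self):
        self._data = {}
        self._handlers = {}

    def onchange(self, key, handler):
        self._handlers[key] = handler

    def set(self, key, value):
        self._data[key] = value
        if key in self._handlers:
            self._handlers[key]()      # notify after the value is stored

opts = Options()
opts.onchange("sd_hypernetwork", lambda: print("reloading hypernetwork:", opts._data["sd_hypernetwork"]))
opts.set("sd_hypernetwork", "anime")   # prints: reloading hypernetwork: anime
```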
@@ -101,7 +107,7 @@ def webui():
    demo = modules.ui.create_ui(wrap_gradio_gpu_call=wrap_gradio_gpu_call)

-    demo.launch(
+    app,local_url,share_url = demo.launch(
        share=cmd_opts.share,
        server_name="0.0.0.0" if cmd_opts.listen else None,
        server_port=cmd_opts.port,
@@ -110,6 +116,8 @@ def webui():
        inbrowser=cmd_opts.autolaunch,
        prevent_thread_lock=True
    )
+
+    app.add_middleware(GZipMiddleware,minimum_size=1000)

    while 1:
        time.sleep(0.5)
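The change above wires Gradio's underlying FastAPI app through `GZipMiddleware`. A standalone sketch of the same wiring on a bare FastAPI app (the `/big` route is made up for illustration): responses larger than `minimum_size` bytes are gzip-compressed when the client sends `Accept-Encoding: gzip`, which helps with large HTML/JSON payloads over slow links.

```python
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware

app = FastAPI()
app.add_middleware(GZipMiddleware, minimum_size=1000)  # skip compressing tiny responses

@app.get("/big")
def big_response():
    return {"payload": "x" * 5000}  # large enough to be compressed on the wire
```

Serve with e.g. `uvicorn module:app` and compare transfer sizes with and without the `Accept-Encoding: gzip` header.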