
sd-webui-lora-block-weight's Introduction

LoRA Block Weight

Important

If you see the error "ValueError: could not convert string to float",
use the new syntax: <lora:"lora name":1:lbw=IN02>

Updates/更新情報

2024.04.06.0000(JST)

2023.11.22.2000(JST)

  • bugfix
  • added new feature: start in steps

2023.11.21.1930(JST)

  • added new feature: stop in steps
    By specifying <lora:"lora name":lbw=ALL:stop=10>, you can disable the effect of the LoRA at the specified step. For character or composition LoRAs, a sufficient effect is achieved in about 10 steps, and cutting the LoRA off at that point minimizes its impact on the painting style.

Overview

LoRA is a powerful tool, but it can be difficult to control and may affect areas you do not want it to. This script lets you set LoRA weights block by block, which may help you get the image you want.

Usage

Place lora_block_weight.py in the script folder, or install it from the Extensions tab in the web UI. After installing, restart web-ui.bat.

Active

Check this box to activate it.

Prompt

In the prompt box, enter the Lora you wish to use as usual. Enter the weight or identifier by typing ":" after the strength value. The identifier can be edited in the Weights setting.

<lora:"lora name":1:0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0>
<lora:"lora name":1:0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0>  (a1111-sd-webui-locon, etc.)
<lyco:"lora name":1:1:lbw=IN02>  (a1111-sd-webui-lycoris, web-ui 1.5 or later)
<lyco:"lora name":1:1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0>  (a1111-sd-webui-lycoris, web-ui 1.5 or later)

For LyCORIS with a1111-sd-webui-lycoris, the syntax is different: use lbw=IN02 (the order of arguments does not matter) and follow the LyCORIS syntax for the rest, such as the U-Net weight; see the LyCORIS documentation for details. a1111-sd-webui-lycoris is still under development, so this syntax may change.

The LoRA strength is still in effect and applies to all blocks.
Identifiers are case-sensitive. LyCORIS uses the full-model blocks, so you need to input 26 weights. You can also use a LoRA preset, in which case the weights of blocks not present in the LoRA are set to 0. Lines in a preset that do not follow this format are treated as comments.

start, stop step

By specifying <lora:"lora name":lbw=ALL:start=10>, the effect of the LoRA appears from the designated step. By specifying <lora:"lora name":lbw=ALL:stop=10>, the effect of the LoRA is removed at the specified step. For character or composition LoRAs, a sufficient effect is usually achieved in about 10 steps, and cutting the LoRA off at that point minimizes its influence on the painting style. By specifying <lora:"lora name":lbw=ALL:step=5-10>, the LoRA is active only between steps 5 and 10.

Weights Setting

Enter the identifier and weights. Unlike the full model, a LoRA is divided into 17 blocks, including the encoder, so enter 17 values. BASE, IN, OUT, etc. correspond to the full-model blocks. Because of the various formats (full model, LyCORIS, SDXL), the script currently accepts 12, 17, 20, or 26 weights. Generally, weights in a mismatched format will still work, but any blocks not provided are treated as having a weight of 0.

LoRA(17)

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
BASE IN01 IN02 IN04 IN05 IN07 IN08 MID OUT03 OUT04 OUT05 OUT06 OUT07 OUT08 OUT09 OUT10 OUT11

LyCORIS, etc. (26)

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
BASE IN00 IN01 IN02 IN03 IN04 IN05 IN06 IN07 IN08 IN09 IN10 IN11 MID OUT00 OUT01 OUT02 OUT03 OUT04 OUT05 OUT06 OUT07 OUT08 OUT09 OUT10 OUT11

SDXL LoRA(12)

1 2 3 4 5 6 7 8 9 10 11 12
BASE IN04 IN05 IN07 IN08 MID OUT00 OUT01 OUT02 OUT03 OUT04 OUT05

SDXL - LyCORIS/LoCon(20)

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
BASE IN00 IN01 IN02 IN03 IN04 IN05 IN06 IN07 IN08 MID OUT00 OUT01 OUT02 OUT03 OUT04 OUT05 OUT06 OUT07

Special Values (Random)

Normally a numerical value must be entered for each block, but entering R or U inserts a random value instead.
R : a value with 3 decimal places from 0 to 1. U : a value with 3 decimal places from -1.5 to 1.5.

For example, with ROUT:1,1,1,1,1,1,1,1,R,R,R,R,R,R,R,R,R only the OUT blocks are randomized. The randomized values are displayed on the command prompt screen when the image is generated.
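The R and U tokens can be thought of as samples from uniform ranges. A minimal sketch follows; the extension's actual RNG and rounding behavior are not specified here.

```python
import random

def special_value(token: str) -> float:
    """Resolve the special tokens R and U into random values.

    R -> uniform in [0, 1], U -> uniform in [-1.5, 1.5],
    both rounded to 3 decimal places as described above.
    """
    if token == "R":
        return round(random.uniform(0.0, 1.0), 3)
    if token == "U":
        return round(random.uniform(-1.5, 1.5), 3)
    return float(token)  # plain numbers pass through unchanged

weights = [special_value(t) for t in "1,1,1,1,1,1,1,1,R,R,R,R,R,R,R,R,R".split(",")]
print(weights)  # first 8 stay 1.0, the OUT blocks get random values
```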

Special Values (Dynamic)

The special value X may also be included to use a dynamic weight specified in the LoRA syntax. It is activated by adding an extra weight value after the preset identifier.

For example, if ROUT:X,1,1,1,1,1,1,1,1,1,1,1,X,X,X,X,X and your prompt contains <lora:my_lora:0.5:ROUT:0.7>, the X weights in ROUT are replaced with 0.7 at runtime.

NOTE: If you select an Original Weight tag that has a dynamic weight (X) and do not specify a value in the LoRA syntax, it defaults to 1.
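The X substitution can be sketched as a simple token replacement over the preset; resolve_weights and the preset string below are illustrative names, not the extension's actual code.

```python
def resolve_weights(preset: str, dynamic: float = 1.0) -> list[float]:
    """Replace the dynamic token X with the extra weight from the prompt.

    X defaults to 1 when no extra value is given, as noted above.
    """
    return [dynamic if t == "X" else float(t) for t in preset.split(",")]

# <lora:my_lora:0.5:ROUT:0.7> -> dynamic weight 0.7 (names are illustrative)
print(resolve_weights("X,1,1,1,1,1,1,1,1,1,1,1,X,X,X,X,X", 0.7))
```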

Save Presets

The "Save Presets" button saves the text in the current text box. Since a text editor is more convenient, use the "Open TextEditor" button to open one, edit the text, and reload it.
The text box above the Weights setting lists the currently available identifiers, which is useful for copying and pasting into an XY plot. Only presets with 17 values appear in the list.

Fun Usage

Used in conjunction with the XY plot, you can examine the impact of each block.

The setting values are as follows.
NOT:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
ALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
INS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0
IND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0
INALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0
MIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0
OUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0
OUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1
OUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1

XYZ Plotting Function

The optimal value can be searched for by changing the value of each block individually.

Usage

Check "Active" to activate the function. If a Script (such as the XYZ plot built into Automatic1111) is enabled, it takes precedence, so select "none" there. Hires. fix is not supported. Batch size is fixed at 1; set batch count to 1.
Enter XYZ as the identifier of the LoRA you want to vary. It works even if no value corresponding to XYZ is entered in the preset; in that case, all weights start at 0. If a value corresponding to XYZ is entered, that value is used as the initial value.
Entering ZYX inputs the inverted values automatically, which lets you match the weights of two LoRAs. Entering XYZ for LoRA1 and ZYX for LoRA2, you get:
LoRA1 1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0
LoRA2 0,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1
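The ZYX inversion above is simply 1 minus each weight, so XYZ and ZYX always sum to 1 in every block. A sketch:

```python
def zyx(weights: list[float]) -> list[float]:
    """ZYX inversion: each block weight w becomes 1 - w, so LoRAs given
    XYZ and ZYX always sum to 1 in every block."""
    return [1 - w for w in weights]

lora1 = [1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(zyx(lora1))  # the LoRA2 row above
```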

Axis type

values

Sets the weights to try for the blocks being changed. Enter the values separated by commas, e.g. 0,0.25,0.5,0.75,1.

Block ID

If a block ID is entered, only that block is changed to the value specified under values. As with the other types, separate entries with commas. Multiple blocks can be changed at the same time by separating them with a space, and a hyphen selects a range (ranges can span MID: IN08-M00-OUT03 is contiguous). A leading NOT inverts the selection, so NOT IN09-OUT02 changes all blocks except IN09-OUT02; NOT has no effect unless it comes first.
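A hypothetical sketch of how a Block ID spec with ranges and NOT could be expanded over the 17-block list; expand is an illustrative helper, not the extension's actual code, and it handles only single-hyphen ranges.

```python
# 17-block LoRA layout from the table above
BLOCKS17 = ["BASE", "IN01", "IN02", "IN04", "IN05", "IN07", "IN08", "MID",
            "OUT03", "OUT04", "OUT05", "OUT06", "OUT07", "OUT08", "OUT09",
            "OUT10", "OUT11"]

def expand(spec: str) -> list[str]:
    """Expand a Block ID spec such as "NOT IN05-MID" into block names.

    A hyphen selects an inclusive range, spaces separate multiple entries,
    and a leading NOT inverts the selection.
    """
    invert = spec.startswith("NOT ")
    if invert:
        spec = spec[4:]
    selected: set[str] = set()
    for part in spec.split():
        if "-" in part:
            lo, hi = part.split("-", 1)
            selected.update(BLOCKS17[BLOCKS17.index(lo):BLOCKS17.index(hi) + 1])
        else:
            selected.add(part)
    if invert:
        selected = set(BLOCKS17) - selected
    return [b for b in BLOCKS17 if b in selected]

print(expand("IN05-MID"))      # ['IN05', 'IN07', 'IN08', 'MID']
print(expand("NOT IN05-MID"))  # the remaining 13 blocks
```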

seed

Changes the seed; this is intended to be specified on the Z axis.

Original Weights

Specify the preset used as the initial value for each block's weight; enter an identifier registered in the presets. If Original Weights is enabled, the value entered for XYZ is ignored.

Input example

X : value, value : 1,0.25,0.5,0.75,1
Y : Block ID, value : BASE,IN01-IN08,IN05-OUT05,OUT03-OUT11,NOT OUT03-OUT11
Z : Original Weights, Value : NONE,ALL0.5,ALL

In this case, an XY plot is created for each of the initial values NONE, ALL0.5, ALL. If you select Seed for Z and enter -1,-1,-1, the XY plot is created 3 times with different seeds.

Original Weights Combined XY Plot

If both X and Y are set to Original Weights, an XY plot is made by combining the weights. If both X and Y have a weight in the same block, the Y value is set to zero before the arrays are added; in the mirrored YX case, the X value is set to zero instead. The intended usage is with non-overlapping blocks.

Given these names and values in the "Weights setting":
INS:1,1,1,0,0,0,0,0,0,0,0,0
MID:1,0,0,0,0,1,0,0,0,0,0,0
OUTD:1,0,0,0,0,0,1,1,1,0,0,0

With:
X : Original Weights, value: INS,MID,OUTD
Y : Original Weights, value: INS,MID,OUTD
Z : none

An XY plot is made with 9 elements. The diagonal is the X values: INS,MID,OUTD unchanged. So we have for the first row:

INS+INS  = 1,1,1,0,0,0,0,0,0,0,0,0 (Just INS unchanged, first image on the diagonal)
MID+INS  = 1,1,1,0,0,1,0,0,0,0,0,0 (second column of first row)
OUTD+INS = 1,1,1,0,0,0,1,1,1,0,0,0 (third column of first row)

Then the next row is INS+MID, MID+MID, OUTD+MID, and so on. Example image here
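The combination rule can be sketched as follows, using the INS/MID/OUTD values from the example; combine is an illustrative helper, with only the "X wins on overlap" rule taken from the text.

```python
def combine(x: list[float], y: list[float]) -> list[float]:
    """Combine two Original Weights presets for one cell of the XY grid.

    Where the X preset already sets a block, the Y value is zeroed before
    adding, so X wins on overlapping blocks (as described above).
    """
    return [xi + (0.0 if xi != 0 else yi) for xi, yi in zip(x, y)]

INS  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
MID  = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
OUTD = [1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0]

print(combine(MID, INS))   # the MID+INS row above
print(combine(OUTD, INS))  # the OUTD+INS row above
```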

Effective Block Analyzer

This function checks which blocks are working well. The effect of a block is visualized and quantified by setting the intensity of the other blocks to 1, decreasing the intensity of the block you want to examine, and taking the difference.

Range

If you enter 0.5, 1, all initial values are set to 1 and only the target block is computed at 0.5. Normally 0.5 makes a visible difference, but some LoRAs may have difficulty showing one; in that case, set 0.5 to 0 or a negative value.

settings

diff color

Specify the background color of the diff file.

change X-Y

Swaps the X and Y axes. By default, Block is assigned to the Y axis.

Threshold

Sets the threshold at which a change is recognized when calculating the difference. Basically, the default value is fine, but if you want to detect subtle differences in color, etc., lower the value.

Blocks

Enter the blocks to be examined, using the same format as for XYZ plots.


Guide for API users

Regular Usage

By default, Active is checked, so the extension works as soon as it is installed; just enter the prompt in the documented format. When it runs, the phrase "LoRA Block Weight" appears on the command prompt screen. If for some reason Active is not enabled, you can activate it by passing a value for "alwayson_scripts" in the API request. When API mode is enabled and you use the UI, two copies of the extension appear; use the bottom one. The default presets are available as-is. To use your own presets, either edit the preset file or use the following format in the data passed to the API.

The following JSON can be passed to the API; the presets entered here become available. To use multiple presets, separate them with \n.

"prompt": "myprompt, <lora:mylora:1:MYSETS>",
"alwayson_scripts": {
    "LoRA Block Weight": {
        "args": ["MYSETS:1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nYOURSETS:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1", true, 1 ,"","","","","","","","","","","","","",""]
    }
}
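As a sketch, the same payload can be sent to a locally running web UI through the standard /sdapi/v1/txt2img endpoint. The host and port are assumptions; adjust them to your own setup.

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust host/port to your own web UI instance.
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "myprompt, <lora:mylora:1:MYSETS>",
    "alwayson_scripts": {
        "LoRA Block Weight": {
            "args": [
                "MYSETS:1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0",
                True, 1,
                "", "", "", "", "", "", "", "", "", "", "", "", "", "",
            ],
        }
    },
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to send to a running web UI
```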

XYZ Plot

Please use the format below. Delete "alwayson_scripts", as keeping it will cause an error.

"prompt": "myprompt, <lora:mylora:1:XYZ>",
"script_name":"LoRA Block Weight",
"script_args": ["XYZ:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1", true, 1 ,"seed","-1,-1","","","","","","","","","","","",""]

In this case, the six values following true, 1 correspond to xtype, xvalues, ytype, yvalues, ztype, zvalues. Blank entries are ignored. Follow the instructions in the XYZ plot section when entering values. All values, even numbers, should be enclosed in "".

The following types are available.

"none","Block ID","values","seed","Original Weights","elements"

Effective Block Analyzer

It can be used by using the following format.

"prompt": "myprompt, <lora:mylora:1:XYZ>",
"script_name":"LoRA Block Weight",
"script_args": ["XYZ:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1", true, 2 ,"","","","","","","0,1","17ALL",1,"white",20,true,"",""]

For "0,1", specify the weights. Specifying "17ALL" examines all blocks of a normal LoRA; to specify blocks individually, write them like "BASE,IN00,IN01,IN02". The "1" is the number of runs (2 or more uses multiple seeds), "white" is the background color, and the boolean specifies whether to swap X and Y.

Make Weights

In "make weights", you can create a weight list from sliders. Pressing the "add to preset" button appends the weight under the specified identifier to the end of the presets; if a preset with the same name already exists, it is overwritten. The "add to preset and save" button also saves the presets at the same time.



For block-wise merging, see below.

elemental

See here for details.

Usage

In the Elemental tab, set identifiers in the same way as for block weights. The elemental identifier is entered after the block identifier: <lora:"lora name":1:IN04:ATTNON>

The format is
identifier:blocks:elements:weight
Elements are matched by partial match: attn1 matches only attn1, while attn matches both attn1 and attn2. Both blocks and elements accept multiple entries separated by spaces.
When print change is enabled, the matched elements are printed on the command prompt.

ALL0:::0
sets the weight of every element to zero.
IN1:IN00-IN11::1
sets every IN element to 1.
ATTNON::attn:1 sets attn in every block to 1.

XYZ plot

In the elements entry of the XYZ plot, comma-separated plotting is possible. In that case, specify
<lora:"lora name":1:XYZ:XYZ>
and enter something like
IN05-OUT05:attn:0,IN05-OUT05:attn:0.5,IN05-OUT05:attn:1
in the elements field to vary only the attn elements from IN05 to OUT05. The initial value can be changed by editing the XYZ preset: by default the elemental XYZ is XYZ:::1, which sets all blocks and all elements to 1, but changing it to XYZ:encoder::1 evaluates with only the text encoder enabled.
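The partial-match rule for elements can be sketched as a simple substring check (illustrative only):

```python
def element_matches(pattern: str, element: str) -> bool:
    """Elements are matched by partial (substring) match: "attn" matches
    attn1 and attn2, while "attn1" matches only attn1."""
    return pattern in element

print(element_matches("attn", "attn1"))   # True
print(element_matches("attn1", "attn2"))  # False
```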


updates/更新情報

2023.10.26.2000(JST)

  • bugfix: Effective block checker did not work correctly
  • bugfix: did not work correctly when "lora in memory" was set to a value other than 0

2023.10.04.2000(JST)

A new feature was added to the XYZ plot. Many thanks to sometimesacoder.

2023.07.22.0030(JST)

  • support SDXL
  • support web-ui 1.5
  • support web-ui without a built-in LoRA system (LyCORIS required)

To use with web-ui 1.5:

<lora:"lora name":1:1:lbw=IN02>  
<lora:"lora name":1:1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0>  

2023.07.14.2000(JST)

2023.5.24.2000(JST)

  • changed directory for presets (extensions/sd-webui-lora-block-weight/scripts/)

2023.5.12.2100(JST)

  • changed syntax of lycoris

2023.04.14.2000(JST)

  • support LyCORIS(a1111-sd-webui-lycoris)

2023.03.20.2030(JST)

  • Comment lines can now be added to presets
  • support XYZ plot hires.fix

2023.03.16.2030(JST)

To use LyCORIS, the separate LyCORIS Extension is required.

2023.02.07 1250(JST)

  • Changed behavior when XYZ plot is Active (the Script of the main UI is prioritized).

2023.02.06 2000(JST)

  • Feature added: XYZ plotting is added.

2023.01.31 0200(JST)

  • Feature added: Random feature is added
  • Fixed: Weighting now works for negative values.

2023.02.16 2040(JST)

  • Fixed an issue where Original Weight could not be set for X or Y
  • Fixed an error that occurred when Effective Weight Analyzer was selected but XYZ's X and Y were not set to Values and Block ID

2023.02.08 2120(JST)

  • Fixed an issue where block weights remained applied during normal use after block weighting had been used
  • Added a feature to identify effective blocks with one click

2023.02.08 0050(JST)

  • Fixed an issue where the seed was not fixed in some environments

2023.02.07 2015(JST)

  • Fixed an issue where negative weights did not work correctly


sd-webui-lora-block-weight's People

Contributors

akegarasu, alulkesh, daniel-poke, hako-mikan, manaball123, naokisato102, nonnonstop, oedosoldier, sometimesacoder, storyicon, torara46, zeng-hq


sd-webui-lora-block-weight's Issues

LoRA block weight change takes NO EFFECT in new versions of stable diffusion webui

LoRA block weight change takes NO EFFECT after commit 80b26d2a of sd webui.

Here is the link to a commit: 80b26d2a commit

Commit message:
apply Lora by altering layer's weights instead of adding more calculations in forward()

What's the problem?

First, generate an image with a lora prompt: <lora:example:1:ALL>->img1
Then, fix seed, and generate another image with a lora prompt: <lora:example:1:NONE>->img2
You will find that img2 is the same as img1.❌
But if you change the multiplier of added lora, just a little bit: <lora:example:0.99:NONE>->img3
img3 is the result that img2 should be.✔

So, why?

This commit changed the behaviour of the builtin lora extension. In previous versions, the lora was applied to the model by adding a "lora forward" step to the forward pass. After this commit, the lora is applied by altering the layers' weights, and the author added a cache mechanism to avoid reapplying the lora every time.

This change leads to a problem:
If this script changes the weights of a lora layer (for example, from <lora:example:1:ALL> to <lora:example:1:NONE>), the changed values will NOT take effect, because the cache considers the changed lora layer identical to the previous one, so the lora extension does NOTHING.
But if you change the multiplier a little (for example, from <lora:example:1:NONE> to <lora:example:0.99:NONE>), the cache is dropped and the weight changes of the LoRA blocks take effect.

Solution

Here is a solution:
Change line 262 in stable-diffusion-webui/extensions-builtin/lora/lora.py:
if current_names != wanted_names: -> if True:
That disables the cache and forces the lora extension to reapply the lora every time.

But I don't think it's a good solution, maybe we can modify this script to fix this problem.

Forgive my poor English, and feel free to ask me more about this problem.🙂

ComfyUI requires lora-block-weight

Hello, lora-block-weight is a great extension. Recently, for work reasons, we have had to move our workflow from auto1111 to ComfyUI, but lora-block-weight is essential. If the author or another developer has time, please create a lora-block-weight node for ComfyUI.

Thank you. I wish you have a nice day!

Only the strength of the last Lora in the prompt counts (and it also applies to all the other Loras in the prompt)

I was confused about why my images were coming out different from previous versions, but I finally figured out that the strength of the last lora applies to ALL the other loras.

For example if the loras are
<lora:exampleX:1:INALL>, <lora:exampleY:0.5:OUTS>, <lora:exampleZ:0.8:MIDD>
Every lora will have the strength of lora:exampleZ, i.e., the above prompt is the same as
<lora:exampleX:0.8:INALL>, <lora:exampleY:0.8:OUTS>, <lora:exampleZ:0.8:MIDD>

It doesn't matter what loras I use, the last one will always dictate the strength of the others
<lora:exampleZ:1:INALL>, <lora:exampleY:0.5:OUTS>, <lora:exampleX:2:MIDD>
will produce the same result as
<lora:exampleZ:2:INALL>, <lora:exampleY:2:OUTS>, <lora:exampleX:2:MIDD>

Tags are not in list

When trying to XYZ plot with example values from fun usage, getting this error:

Traceback (most recent call last):
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\GitProjects\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 815, in newrun
    processed = script.run(p, *script_args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 314, in run
    if "values" in xtype:c_base = weightsdealer(x,y,base)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 277, in weightsdealer
    flagger[blockid.index(id)] =changer
ValueError: 'ALL' is not in list

I am putting NOT,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL in the box to the right of the "Active" checkbox and in "Y values"

And putting these:

NOT:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
ALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
INS:1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
IND:1,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0
INALL:1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0
MIDD:1,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0
OUTD:1,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0
OUTS:1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1
OUTALL:1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1

into Weights setting and clicking "Save presets"

If I click on "Reload Tags", it erases everything in the box to the right of the "Active" checkbox.

If you need any other info, feel free to ask. Ty.

Bug: Extension stops generating images when using hires. fix

First of all thank you for this wonderful extension!
There is a bug related to image grid generation that makes this extension stop working when it is used with AUTOMATIC1111's WebUI hires. fix option.
The error is: AssertionError: bad number of horizontal texts: 5; must be 7.
This is probably due to difference in grid parameter processing in WebUI and Lora code.

specs as per WebUI:
python: 3.10.7  •  torch: 2.0.0+cu118  •  xformers: N/A  •  gradio: 3.16.2  •  commit: a9fed7c3  •  checkpoint: 89d59c3dde

When I run this without hires. fix, there are no issues.
I have tried changing the settings in "Settings" -> "User interface" -> "Show grid in results for web"
and "Saving images/grids" -> "Always save all generated image grids"
to make this work without generating a grid image, but the error still occurs. Skipping grid image generation doesn't help: the code hits the error and stops working whether or not the user wants a grid.

There is a similar error here: AUTOMATIC1111/stable-diffusion-webui#6866
But I am not sure why there is still the error.

With my limited Python knowledge, it looks like images.py expects 7 but the Lora code is outputting 5.

The default value of DyLoRA in LyCORIS should be of type None, rather than [0]

Hi,

Regarding the README instructions for using LyCORIS from a1111-sd-webui-lycoris, the recommendation to set DyLoRA to [ :0: ] may not be correct.

README.md
For LyCORIS using a1111-sd-webui-lycoris, syntax is different. :1:1:0:IN02you need to input two value for textencoder and U-net, and :0: for DyLoRA. a1111-sd-webui-lycoris is under under development, so this syntax might be changed.

This is because the initial value of DyLoRA in LyCORIS is of type None, and setting it to a value (e.g. 0) may result in different behavior compared to the default value.

In lycoris.py code, the default value of dyn_dims is None, as specified in line 471.

a1111-sd-webui-lycoris/lycoris.py
...
def load_lycos(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None):
...
lyco.dyn_dim = dyn_dims[i] if dyn_dims else None

Can you please confirm? Thank you!

Incompatibility after update: UnboundLocalError: local variable 'output_shape' referenced before assignment

Webui version: 955df77
Locon/Lycoris extension version: 0224f1ad

Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\stable-diffusion-webui\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 305, in lora_Linear_forward
    lora_apply_weights(self)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\lora.py", line 273, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 564, in lora_calc_updown
    updown = rebuild_weight(module, target)
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 557, in rebuild_weight
    if len(output_shape) == 4:
UnboundLocalError: local variable 'output_shape' referenced before assignment

17 lora block explanation

Sorry in advance; since I see no discussion board here, I'm posting this as an issue.

I still can't understand what these weights mean. So a LoRA has 17 blocks in it; okay, but what exactly do these 17 blocks specify? Is the image divided into 17 areas based on its dimensions? E.g. for 512x512, one block for the whole 512x512 and 16 blocks as tiles?
(diagram)
Or maybe like this:
(diagram)
Or are they not based on dimensions at all, but a specification of RGB, hue, value, saturation, etc.?

Seeing the example with many IN123, MID123, OUT1234 entries only confuses me even more.

Also, what happens if I turn off / uncheck the block weight extension but leave the block-weight syntax in the prompt and then generate an image?
(screenshots)

If all of my hypotheses/inferences are wrong, what do these mean: ALL, NONE, BASE, IN123, MID1234, OUT12345?
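For what it's worth, the blocks are layers of the model, not regions of the image: BASE is the text encoder, and the IN/M/OUT entries are the U-Net's down, middle, and up blocks that contain LoRA-affected layers. A sketch of the commonly cited 17-block layout for SD1.x (treat the exact identifiers as an assumption; the extension's own block list is authoritative):

```python
# Commonly cited 17-block layout for SD1.x LoRA block weight (assumption;
# check the extension's documentation for the authoritative list).
BLOCKS_17 = [
    "BASE",                                           # text encoder
    "IN01", "IN02", "IN04", "IN05", "IN07", "IN08",   # U-Net down blocks
    "M00",                                            # U-Net middle block
    "OUT03", "OUT04", "OUT05", "OUT06", "OUT07",      # U-Net up blocks
    "OUT08", "OUT09", "OUT10", "OUT11",
]
# The 17 comma-separated numbers in the prompt line up with these
# identifiers, left to right.
print(len(BLOCKS_17))  # 17
```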

This extension has conflict with "sd-webui-locon"

Hello, I think this extension conflicts with "a1111-sd-webui-locon":
https://github.com/KohakuBlueleaf/a1111-sd-webui-locon

If you have both installed and reload your UI, your LoRAs can no longer be used/loaded; the console throws:
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)`

Unchecking the extensions won't fix the issue until you relaunch your web UI.

How to block-weight IA3 and LoKr?

For a LoKr LyCORIS, I use lyco:1.5lokrrryu:0.8:0.8 or lyco:1.5lokrrryu:0.8:0.8:LOHA1 with
LOHA1:1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1
(comparison images attached)

I don't know; the difference is small compared to LoRA block weight.

lyco:1.5lokrrryu:0.8:0.8:0:LOHA1 will show
lbw(lycomo.loaded_lycos[l],lwei[n],elements[n])
IndexError: list index out of range
I don't use DyLoRA.

Loras not working without presets.

It appears LoRA weights are not applied properly without specifying a block weight.
For example, <lora:artist:1> is not working as intended, but <lora:artist:1:ALL> works just fine.

Enhancement: ZYX Tag

It would be really nice to have a ZYX tag = 1 - XYZ
basically
XYZ = 1,0.75,0.5,0.25...
ZYX = 0,0.25,0.5,0.75...
Then when running grids, you could "fill in" blocks of LoRA1 with blocks of LoRA2

I am using this extension to search for LoRA merge settings for Supermerger.
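The requested complement is a one-liner; a minimal sketch of what a ZYX tag would compute (the helper name is hypothetical):

```python
# Hypothetical helper: ZYX = 1 - XYZ, element-wise over the block weights.
def complement(weights):
    return [1 - w for w in weights]

xyz = [1, 0.75, 0.5, 0.25]
print(complement(xyz))  # [0, 0.25, 0.5, 0.75]
```

With this, block k of LoRA1 at weight w could be paired in a grid with block k of LoRA2 at weight 1 - w, as the request describes.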

Misspelling a LoRA name or specifying a nonexistent LoRA raises an error and disables all LoRAs

As the title says, if the prompt references a LoRA that I don't have, the error below appears; an image is still generated, but every LoRA in use is disabled.

Couldn't find Lora with name XXX(name of a LoRA I don't own)XXX
Error running process_batch: D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\data\stable-diffusion-webui\modules\scripts.py", line 435, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 226, in process_batch
loradealer(self.newprompts ,self.lratios)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 425, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers)
File "D:\data\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 450, in load_loras_blocks
locallora = lora.load_lora(name, lora_on_disk.filename)
AttributeError: 'NoneType' object has no attribute 'filename'

Enhancement: specify the weight of all 17 blocks directly in the prompt

The final result I'm asking for:
<lora:loraname:1:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1>
This should generate a picture with weights of 0.5 at OUT4 and OUT8 blocks, all other blocks are 1.

Long read:
After using XYZ plots, I found that I want to reduce the OUT4 and OUT8 blocks to 0.5.

None of the standard tags work for me, so I need to create a new tag, such as CREATIVENAME:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1

The query string will be
<lora:loraname:1:CREATIVENAME>

The problem is that the CREATIVENAME value may not be saved or may be accidentally changed the next day.

You can give more meaningful names to tags, for example:
NOUT04to05OUT08to05:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1

This solves the comprehension problem, but you can still accidentally mess up or erase saved weights.

Then You can make the tag name and its values almost identical, for example:
1_1_1_1_1_1_1_1_0.5_1_1_1_0.5_1_1_1_1:1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1

The prompt will then say:
<lora:loraname:1:1_1_1_1_1_1_1_1_0.5_1_1_1_0.5_1_1_1_1>

And it will work, and will be stored in the EXIF of the finished image. After all, the most important part of this whole issue is the lack of repeatability due to insufficient information in the EXIF.

But there is still the inconvenience of having to replace _ with , and vice versa. Why not instead add the ability to specify weights directly in the prompt?
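The underscore-to-comma round trip described above is trivial to script; a sketch with hypothetical helper names:

```python
# Hypothetical helpers for the workaround described above: pack a weight
# list into an underscore tag name, and unpack the tag into a preset line.
def weights_to_tag(weights_csv):
    return weights_csv.replace(",", "_")

def tag_to_weights(tag):
    return tag.replace("_", ",")

weights = "1,1,1,1,1,1,1,1,0.5,1,1,1,0.5,1,1,1,1"
tag = weights_to_tag(weights)
print(f"<lora:loraname:1:{tag}>")          # prompt form
print(f"{tag}:{tag_to_weights(tag)}")      # matching preset line
```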

Thank you so much for the wonderful expansion, it opens up amazing possibilities!

Memory Error when running effective Block Analyzer

I wanted to try the Effective Block Analyzer function, with no luck.
As soon as I press "Generate", the Python process eats all 32 GB of my RAM. It stays there a bit and errors out with a MemoryError.
I don't know if this is normal behavior for this function, so I felt the need to share.

I run an experimental installation of torch 2.0.0+cu118, but everything else works as expected.

Console output about the error:

Traceback (most recent call last):
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 595, in newrun processed = script.run(p, *script_args)
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 312, in run
zmen = ",".join([str(random.randrange(4294967294)) for x in range(int(ecount))])
File "H:\Stable-Diffusion-WebUI\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 312, in
zmen = ",".join([str(random.randrange(4294967294)) for x in range(int(ecount))])
MemoryError
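The traceback points at a list comprehension that builds one seed string per requested element, so an unexpectedly large ecount would allocate all of them at once. A lazy sketch of the same idea (a guess at the cause, not a confirmed fix):

```python
import itertools
import random

# Yield seeds on demand instead of materializing one giant list, so memory
# stays flat regardless of how large the requested count is.
def seed_stream(limit=4294967294):
    while True:
        yield random.randrange(limit)

# Take only as many seeds as are actually needed.
first_five = list(itertools.islice(seed_stream(), 5))
print(first_five)
```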

[Bug?] List Index out of Range under Certain Conditions

When the lora with block weight preset is not the first lora in the prompt, the following exception is raised:

Error running process_batch: sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 280, in process_batch
loradealer(o_prompts ,self.lratios,self.elementals)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 496, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers,elements)
File "sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 535, in load_loras_blocks
locallora = lbw(locallora,lwei[i],elements[i])
IndexError: list index out of range

Doesn't work with the latest locon update

New feature added to kohya (LoCon) breaks it

  File "F:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 621, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\stable-diffusion-webui\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 178, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "F:\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 271, in lora_forward
    scale = lora_m.multiplier * (module.alpha / module.dim if module.alpha else 1.0)
AttributeError: 'LoraUpDownModule' object has no attribute 'dim'

The plugin suddenly crashed today

These tags suddenly cannot be read, and the script reports an error.
(Screenshot 2023-04-14 153808)
After tossing and turning for a while, I replaced LoCon with LyCORIS, which resulted in more serious errors.
(Screenshot 2023-04-14 153844)

Error running the extension?

Sorry, but after I installed the extension and used it with default settings, it always reports an error:

Error running process_batch: H:\stable-diffusion-webui-directml\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "H:\stable-diffusion-webui-directml\modules\scripts.py", line 395, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "H:\stable-diffusion-webui-directml\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 226, in process_batch
loradealer(self.newprompts ,self.lratios)
AttributeError: 'Script' object has no attribute 'newprompts'

Whether I write the tag as lora:virtualgirlAim_v20:0.7:ALL or lora:virtualgirlAim_v20:0.7:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, the result is the same.

Please help ,thanks a lot...

ValueError: 'IN00' is not in list

X:Block ID, BASE,Y: values,0, Z:none,, base:0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 (1/130)
locon load lora method
LoRA Block weight: Ralph_222-000007: 0.7 x [0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
locon load lora method

Error completing request
Arguments: ('task(1kwi8o2hu3jdb9b)', '222, 1girl, solo,realistic, black hair, black eyes, looking at viewer, white background, simple background, middle hair, portrait, smile, lips, freckles, makeup,lora:Ralph_222-000007:0.7:XYZ, ', 'EasyNegative, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans,extra fingers,fewer fingers,((watermark:2)),(white letters:1), (multi nipples), lowres, bad anatomy, bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worst quality, low qualitynormal quality, jpeg artifacts, signature, watermark, username,bad feet, {Multiple people},lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, blurry,bad feet,cropped,poorly drawn hands,poorly drawn face,mutation,deformed,worst quality,low quality,normal quality,jpeg artifacts,signature,extra fingers,fewer digits,extra limbs,extra arms,extra legs,malformed limbs,fused fingers,too many fingers,long neck,cross-eyed,mutated hands,polar lowres,bad body,bad proportions,gross proportions,text,error,missing fingers,missing arms,missing legs,extra digit,', [], 30, 0, False, False, 1, 1, 7, 1531129687.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, 'MultiDiffusion', False, 10, 10, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 
False, True, True, 0, 2048, 128, False, '', 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x00000205B72FD4B0>, <scripts.external_code.ControlNetUnit object at 0x00000205B72FFF10>, <scripts.external_code.ControlNetUnit object at 0x00000205B72FFCD0>, False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nFace_Strong:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0\nFace_weak:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0.2,0,0,0,0,0\nMan1:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0\nMan2:1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1\nStyle1:1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0\nStyle2:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nStyle3:1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1\nStyle4:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1', True, 1, 'Block ID', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'values', '0,0.25,0.5,0.75,1', 'none', '', '0.5,1', 
'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, 3, 0, False, False, 0, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 7, 'ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 595, in newrun
processed = script.run(p, *script_args)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 402, in run
if "values" in ytype:c_base = weightsdealer(y,x,base)
File "J:\AI\novelai-webui\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 364, in weightsdealer
flagger[blockid.index(id)] =changer
ValueError: 'IN00' is not in list

Request: allow comments in preset files

If this is already possible, please document how; if it isn't, I'd like it implemented. Without a memo next to each preset, I can't remember what the setting was for.

I think it only takes a small change to the loading code, so I'll try doing it myself.

UnboundLocalError: local variable 'xst' referenced before assignment

Since the newest update, when running a simple X plot with the original block weights, I get this error when the plot finishes and should display the grid. This worked before the update.

I would also like to ask: is it possible to plot the strength value together with the original block weights? For example, NONE,ALL,INS,IND,INALL,MIDD,OUTD,OUTS,OUTALL,ALL0.5 on X and different strength values on Y, from 0 to 1. Thank you for this extension.

File "C:\Users\Computer\webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\Users\Computer\webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\Computer\webui\modules\txt2img.py", line 53, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "C:\Users\Computer\webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 605, in newrun
processed = script.run(p, *script_args)
File "C:\Users\Computer\webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 445, in run
grids.append(smakegrid(images,xst,yst,origin,p))
UnboundLocalError: local variable 'xst' referenced before assignment

'int' object has no attribute 'startswith': what's the problem?

LoRA Block weight :meruTheSuccubus_v116000: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
LoRA Block weight :koreanDollLikeness_v15: [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
Error running process: D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\scripts.py", line 409, in process
script.process(p, *script_args)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 217, in process
loradealer(p,lratios)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 413, in loradealer
if len(lorars) > 0: load_loras_blocks(lorans,lorars,multiple)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 783, in load_loras_blocks
locallora = load_lora(name, lora_on_disk.filename,lwei[i])
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 611, in load_lora
sd = sd_models.read_state_dict(filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 248, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 202, in get_state_dict_from_checkpoint
new_key = transform_checkpoint_dict_key(k)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 190, in transform_checkpoint_dict_key
if k.startswith(text):
AttributeError: 'int' object has no attribute 'startswith'

activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x000002B0FEDA2E30>, <modules.extra_networks.ExtraNetworkParams object at 0x000002B0FEDA2F80>]: AttributeError
Traceback (most recent call last):
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\extra_networks.py", line 75, in activate
extra_network.activate(p, extra_network_args)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions-builtin\Lora\extra_networks_lora.py", line 23, in activate
lora.load_loras(names, multipliers)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions-builtin\Lora\lora.py", line 170, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\extensions\a1111-sd-webui-locon\scripts\main.py", line 273, in load_lora
sd = sd_models.read_state_dict(filename)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 248, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 202, in get_state_dict_from_checkpoint
new_key = transform_checkpoint_dict_key(k)
File "D:\WORK\ai\stable-diffusion-webui\stable-diffusion-webui_23-02-27\modules\sd_models.py", line 190, in transform_checkpoint_dict_key
if k.startswith(text):
AttributeError: 'int' object has no attribute 'startswith'

I got an error when doing Reload Presets in Elemental

If I do Reload Presets without entering anything additional, I get an error.
The same happens when I enter a new preset and click Save Presets, then Reload Presets.
Also, it seems that I have to press Shift+Enter to start a new line in the Elemental preset entry field. Sorry if this is intended behavior.


M1MacbookPro GoogleChrome
python: 3.10.9  •  torch: 2.1.0.dev20230323  •  xformers: N/A  •  gradio: 3.23.0  •  commit: [22bcc7be]

Error shown when pressing Reload Presets:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
TypeError: Script.ui..reloadpresets() takes 0 positional arguments but 1 was given

Error shown when pressing Save Presets:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
TypeError: Script.ui..savepresets() takes 1 positional argument but 2 were given

When installed as extension, txt file cannot be used

This is a great extension. There is little information or guidance on what the LoRA blocks are doing; more independent research has happened for the U-Net.

Maybe we can use this as an extension instead? The paths are currently set to look in the custom-scripts folder.

Or maybe: is it possible to add this LoRA block-merging feature to https://github.com/hako-mikan/sd-webui-supermerger?

(I'm sorry if it's already added. It looked like it's only for extracting LoRAs from models, choosing block weights, and saving as a file.)

XYZ plot does not work

The default XYZ plot works, but this one does not.
Generate produces one image and that's it.
I am trying with the latest updated A1111.

Memory leak

Every time an image is generated with a LoRA, a bit of memory leaks; eventually you get an out-of-memory error.
With extension disabled (steady memory usage):
(memory usage screenshots)

With extension enabled (memory usage increases every time after you hit Generate):
(memory usage screenshots)

To reproduce this easily, set the resolution to 64x64 and 1 step, and add a LoRA to the prompt; the bigger the LoRA, the faster you will notice it.

Draw grid error

When trying to run the Effective Block Analyzer, I get this error:

Traceback (most recent call last):
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\GitProjects\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\GitProjects\stable-diffusion-webui\modules\txt2img.py", line 53, in txt2img
    processed = modules.scripts.scripts_txt2img.run(p, *args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 815, in newrun
    processed = script.run(p, *script_args)
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 343, in run
    grids.append(smakegrid(images,xs,ys,origin,p))
  File "D:\GitProjects\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 771, in smakegrid
    grid = images.draw_grid_annotations(grid,int(p.width), int(p.height), hor_texts, ver_texts)
  File "D:\GitProjects\stable-diffusion-webui\modules\images.py", line 177, in draw_grid_annotations
    assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}'
AssertionError: bad number of horizontal texts: 3; must be 6

If you need any other info, feel free to ask. Ty.

setting weights does not appear to work with API

Despite the prompt sent in the API POST request being identical to the one used in the webui, the latter writes a log in the cmd window saying (LoRA Block weight :(lora name): [(weights here)]) while the former does not; more notably, the end results of the two generations are very different despite all other parameters being identical.
I would assume that this is caused by the tokenizer not properly parsing the weight list (i.e. the [1,1,1,1,1,0,0,0] or whatever) when the prompt is loaded from the API; it only takes effect when the prompt is loaded from the web UI.
Is there any way to work around this? If not, do you plan on making this possible?
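For reference, a minimal request like the following reproduces the setup described above (a hedged sketch: the LoRA name, weight vector, and server address are placeholders; per this report, the block weights embedded in the prompt may simply be ignored server-side):

```python
import json
import urllib.request

# The block-weight syntax rides inside the prompt string itself,
# since the extension parses weights from the prompt text.
payload = {
    "prompt": "1girl, <lora:myLora:1:1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0>",
    "steps": 20,
    "width": 512,
    "height": 512,
}
req = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires a running webui
```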

Suggestion: XYZ Plot for img2img

Hello!

Would it be possible to add the XYZ functionality for img2img? It currently doesn't seem to be working, but returns:

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\stable-diffusion-webui\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "C:\stable-diffusion-webui\stable-diffusion-webui\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 177, in process
    lratios["ZYX"] = lzyx
NameError: name 'lzyx' is not defined

TypeError: gradio.components.Textbox.__init__() got multiple values for keyword argument 'lines'

Error calling: /root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py/ui
Traceback (most recent call last):
  File "/root/autodl-tmp/stable-diffusion-webui/modules/scripts.py", line 262, in wrap_call
    res = func(*args, **kwargs)
  File "/root/autodl-tmp/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 135, in ui
    bw_ratiotags= gr.TextArea(label="",lines=2,value=ratiostags,visible =True,interactive =True,elem_id="lbw_ratios")
  File "/root/miniconda3/envs/xl_env/lib/python3.10/site-packages/gradio/templates.py", line 23, in __init__
    super().__init__(lines=7, **kwargs)
TypeError: gradio.components.Textbox.__init__() got multiple values for keyword argument 'lines'

When generating with a batch count of 2 or more, the results gradually degrade

This may be specific to my environment, but with a block-weight specification active and the batch count set to 2 or more, the generated results become progressively garbled as the batch progresses.
The first image is generated normally, but every subsequent image gets worse and worse.

Since no problem occurred when block weight was not applied, I am reporting this as a likely bug.

Printed Weights Bug

<lora:xxxx:0.5:ALL> and <lora:xxxx:1:ALL0.5>

print the same weights to the console [0.5,0.5......]

But 0.5:ALL actually applies at full weight, which had me chasing ghosts for ages.
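Assuming the intended rule is that the effective per-block weight is the prompt multiplier times the preset value (an assumption about the design, not confirmed from the code), both forms should indeed resolve to 0.5 per block, which matches the printed list but not the observed behavior:

```python
# Assumed rule: effective block weight = prompt multiplier * preset value.
# ALL and ALL0.5 are the 17-block presets from the default preset list.
ALL = [1.0] * 17
ALL05 = [0.5] * 17

def effective(multiplier, preset):
    return [multiplier * w for w in preset]

print(effective(0.5, ALL) == effective(1.0, ALL05))  # prints True
```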

RuntimeError: mat1 and mat2 shapes cannot be multiplied

Hi, I was trying to play with the LoRA block weights but got this error; I'm not sure of the reason.

base model: sd v1-5-pruned-emaonly.ckpt [cc6cb27103]
lora: Moxin_10

any help would be appreciated, thanks!

Loading weights [e1441589a6] from /root/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Loading VAE weights specified in settings: /root/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Weights loaded in 4.8s (load weights from disk: 3.7s, apply weights to model: 0.4s, move model to device: 0.6s).
LoRA Block weight: Moxin_10: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Error completing request
Arguments: ('task(g3376f1vixk2y3g)', 'ultra high res, best quality, photo, 4k, (photorealistic:1.4), (8k, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), ultra-detailed, 1 girl, cute, solo, (nose blush),(smile:1.15),(closed mouth), beautiful detailed eyes, (long hair:1.2), (elegant pose), (Smile), solo, plaid, plaid_skirt, skirt, <lora:Moxin_10:1:NONE>\n', 'nsfw, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, (backlight:1.2), double navel, mutad arms, hused arms, neck lace, analog, analog effects, letters, less fingers, extra fingers, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, backlight,(ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), extra limbs, (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), bad hands, missing fingers, extra digit, (futa:1.1), bad body, NG_DeepNegative_V1_75T,pubic hair, glans', [], 20, 0, False, False, 1, 1, 7, 1204502509.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7f4ed0b98250>, <scripts.external_code.ControlNetUnit object at 0x7f4ed0b984f0>, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 'black', '20', False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "/root/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/root/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/root/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/root/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/root/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "/root/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/root/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/root/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "/root/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 389, in forward
    hidden_states = self.mlp(hidden_states)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 344, in forward
    hidden_states = self.fc1(hidden_states)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 191, in lora_forward
    res = res + module.up(module.down(input)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 513, in forward
    return self.func(x)
  File "/root/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 545, in inference
    return self.up_model(self.down_model(x))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)

IndexError: list index out of range

No matter how I adjust the position, I get an error.

Error running process_batch: D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
  File "D:\AI\Stable Diffusion\modules\scripts.py", line 435, in process_batch
    script.process_batch(p, *script_args, **kwargs)
  File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 280, in process_batch
    loradealer(o_prompts ,self.lratios,self.elementals)
  File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 503, in loradealer
    if len(lorars) > 0: load_loras_blocks(lorans,lorars,multipliers,elements,ltype)
  File "D:\AI\Stable Diffusion\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 529, in load_loras_blocks
    lbw(lora.loaded_loras[l],lwei[n],elements[n])
IndexError: list index out of range

I am calling the api interface of webui, how should I add your function in the script

I am calling the webui API; how should I enable your extension's function in my script?

import requests
import cv2
from base64 import b64encode

def readImage(path):
    img = cv2.imread(path)
    retval, buffer = cv2.imencode('.jpg', img)
    b64img = b64encode(buffer).decode("utf-8")
    return b64img

b64img = readImage(r"C:\Users\Administrator\Desktop\test\demo1.jpg")

class controlnetRequest():
    def __init__(self, prompt):
        self.url = "http://localhost:7860/sdapi/v1/txt2img"
        self.body = {
            "prompt": "<lora:akemiTakada1980sStyle_1:0.75:OUTALL>,takada akemi, 1980s (style),painting (medium), retro artstylewatercolor,looking at viewer,solo,upbody,zz00,eyes_zz00,hair_zz00,(medium)1girl,woman,famale,skyzz00,<lora:zz00:0.6:MIDD>,portrait_zz00,lips_zz00,",
            "negative_prompt": "(painting by bad-artist-anime:0.9), (painting by bad-artist:0.9), watermark, text, error, blurry, jpeg artifacts, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, (worst quality, low quality:1.7), bad anatomy,",
            "seed": -1,
            "subseed": -1,
            "subseed_strength": 0,
            "batch_size": 1,
            "n_iter": 1,
            "steps": 20,
            "cfg_scale": 7,
            "width": 512,
            "height": 512,
            "restore_faces": True,
            "eta": 0,
            "sampler_index": "DPM++ 2M Karras",
            "alwayson_scripts": {
                "LoRA Block Weight": {
                    "Active": True,
                },
                "ControlNet": {
                    "args": [
                        {
                            "enabled": False,
                            "input_image": [b64img],
                            "module": 'softedge_hed',
                            "model": 'control_v11p_sd15_softedge [a8575a2a]'
                        },
                        {
                            "enabled": False,
                            "input_image": "",
                            "module": "",
                            "model": ""
                        }
                    ]
                }
            }
        }

    def sendRequest(self):
        r = requests.post(self.url, json=self.body)
        response = r.json()
        print(response)
        return response

js = controlnetRequest("walter white").sendRequest()
print(js)

import io, base64
from PIL import Image

#pil_img = Image.open(r"C:\Users\Administrator\Desktop\test\demo1.jpg")
image1 = Image.open(io.BytesIO(base64.b64decode(js["images"][0])))
#image2 = Image.open(io.BytesIO(base64.b64decode(js["images"][1])))
print(image1)
#print(image2)

# Save the Pillow image object as a JPEG file
image1.save(r"C:\Users\Administrator\Desktop\test\test1.jpg", format='JPEG')
#image2.save(r"C:\Users\Administrator\Desktop\test\test2.jpg", format='JPEG')
image1.show()
#image2.show()

This code will report an error:
Error running process: D:\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py
Traceback (most recent call last):
  File "D:\novelai-webui-aki-v3\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "D:\novelai-webui-aki-v3\extensions\sd-webui-lora-block-weight\scripts\lora_block_weight.py", line 208, in process
    loraratios=loraratios.splitlines()
AttributeError: 'dict' object has no attribute 'splitlines'

readme Error? in number of weight inputs of LyCORIS

In readme line 37,
<lora:"lora name":1:0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0>. (LyCORIS, etc.)
you have entered 30 weights, but it should be 26.
The same error exists in the Japanese version.

Feature Request - Inherit Special Character

I've been playing around with this for a little while now and I think I am getting the hang of the syntax. One low-hanging fruit I think would be great to implement is a variable syntax that inherits the original weight in the prompt and substitutes it for a special character.

For example: <lora:my_lora:0.8:DEMO> where DEMO:X,0,0,0,0,0,0,0,X,X,X,X,0,0,0,0,0, this would take the 0.8 from the prompt and replace it over the X (or whatever you choose) becoming 0.8,0,0,0,0,0,0,0,0.8,0.8,0.8,0.8,0,0,0,0,0 at runtime.

This saves making multiple permutations of the same structure but with only the weight changed identically over the whole array.

Additionally, I thought you could perhaps take this a step further and allow an offset to the replace syntax.

For example: <lora:my_lora:0.8:DEMO> where DEMO:X,0,0,0,0,0,0,0,X,X+0.1,X+0.2,X+0.1,0,0,0,0,0, this would take the 0.8 from the prompt as previously suggested, but also apply any + or - offsets placed next to them. So the weights would become 0.8,0,0,0,0,0,0,0,0.8,0.9,1,0.9,0,0,0,0,0 at runtime.

What do you think?
