shepnerd / inpainting_gmcnn
Image Inpainting via Generative Multi-column Convolutional Neural Networks, NeurIPS 2018
License: MIT License
It's easy to implement the spatially variant reconstruction loss on rectangular masks. Did you implement the spatially variant reconstruction loss on irregular masks?
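For what it's worth, here is a rough numpy sketch of how a spatially variant (confidence-driven) weight map can be built for an arbitrary mask shape, not only rectangles. The box blur and iteration count are illustrative assumptions, not the repo's exact settings:

```python
import numpy as np

def box_blur(x):
    # 3x3 mean filter with edge padding.
    h, w = x.shape
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def confidence_weights(mask, iters=4):
    """mask: 1.0 inside the hole, 0.0 in the known region.
    Returns per-pixel reconstruction weights inside the hole."""
    conf = 1.0 - mask                        # known pixels start fully confident
    w = np.zeros_like(mask)
    for _ in range(iters):
        conf = box_blur(conf)
        conf = np.maximum(conf, 1.0 - mask)  # known region stays at confidence 1
        w = conf * mask                      # weights only apply inside the hole
    return w

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                         # a square hole, but any shape works
w = confidence_weights(mask)
print(w[2, 2] > w[3, 3])  # True: boundary pixels outweigh deep-hole pixels
```

Since the propagation only depends on the mask array, nothing in this scheme is specific to rectangles.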
I just use 20 images for the experiment, and train.txt includes the 20 image paths, like /dataset/celeba_256X256/000001.jpg ... dataset/celeba_256X256/0000020.jpg
python train.py --dataset [./dataset/celeba_256X256] --data_file [./dataset/train.txt] --gpu_ids [0] --pretrain_network 1 --batch_size 8
And the problem:
File "train.py", line 11, in
config = TrainOptions().parse()
File "/media//inpainting_gmcnn-master/pytorch/options/train_options.py", line 74, in parse
id = int(str_id)
ValueError: invalid literal for int() with base 10: '[0]'
By the way, I only have one GPU (a 1060).
Can you give me the dataset (CelebA-HQ) you used to train the celeba-hq-256 model? My email is [email protected]. Thanks again.
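Regarding the ValueError above: the square brackets in the README command are placeholders, not literal syntax. A minimal sketch (my reconstruction of the parsing, not the repo's exact code) of the failure and the fix:

```python
# train_options.py splits --gpu_ids on ',' and calls int() on each piece,
# so a literal '[0]' cannot be parsed:
try:
    int('[0]')
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: '[0]'

# Drop the brackets when invoking the script, e.g. --gpu_ids 0 (or 0,1):
ids = [int(s) for s in '0'.split(',')]
print(ids)  # [0]
```

The same applies to the other bracketed arguments: pass `--dataset ./dataset/celeba_256X256` and `--data_file ./dataset/train.txt` without brackets.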
@shepnerd Thanks for your great work. However, when I try to train on 512-size CelebA-HQ images, the results are bad. Which parameters need to be modified? I only modified the following parameters:
img_shapes=256
mask_shapes=128
g_cnum=64
And the results are bad.
Hello, is there a TensorFlow version of the code for the quantitative analysis part of the paper?
I notice that you use different strategies to upsample the components: bilinear for the 7x7 branch, deconvolution for the 3x3 branch, and both for the 5x5 branch. Is there any difference between bilinear and deconvolution? Why do you arrange the upsampling strategies like this?
Thank you!
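Not the authors' answer, but the practical difference can be seen in a small sketch: bilinear upsampling is a fixed, parameter-free interpolation, while a deconvolution (transposed convolution) is learned and, with some kernel/stride combinations, can introduce checkerboard artifacts. A pure-numpy bilinear 2x upsample for illustration:

```python
import numpy as np

def bilinear_upsample_2x(x):
    """Double the H and W of a 2-D array by bilinear interpolation;
    a fixed, parameter-free operation, unlike a learned deconvolution."""
    h, w = x.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(2 * h):
        for j in range(2 * w):
            fi, fj = i / 2.0, j / 2.0        # fractional source coordinates
            i0, j0 = int(fi), int(fj)
            i1, j1 = min(i0 + 1, h - 1), min(j0 + 1, w - 1)
            di, dj = fi - i0, fj - j0
            out[i, j] = ((1 - di) * (1 - dj) * x[i0, j0]
                         + (1 - di) * dj * x[i0, j1]
                         + di * (1 - dj) * x[i1, j0]
                         + di * dj * x[i1, j1])
    return out

x = np.array([[0.0, 2.0], [4.0, 6.0]])
print(bilinear_upsample_2x(x))
```

A transposed convolution producing the same output size would instead learn its interpolation weights, which adds capacity but also the risk of artifacts; mixing both in one branch is presumably a compromise, though only the authors can confirm the motivation.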
I tried testing your code on masks of large sizes and the output is quite bad. Any ideas for a workaround?
How do I resume a previously stopped training? It seems that if I try to load a previous model, it starts with a new folder.
Following is the full stack trace of the PyTorch version...
Loading pretrained model from checkpoints/paris-streetview_256x256_rect
Traceback (most recent call last):
File "test.py", line 32, in
ourModel.load_networks(getLatest(os.path.join(config.load_model_dir, '*.pth')))
File "C:\Projects\inpainting_gmcnn\pytorch\util\utils.py", line 84, in getLatest
return files[sorted(range(len(file_times)), key=lambda x: file_times[x])[-1]]
IndexError: list index out of range
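The IndexError means the glob in getLatest matched no files, typically because the checkpoint directory is wrong or empty. A defensive variant, written as an assumption about what the helper does rather than a copy of the repo's code:

```python
import glob
import os

def get_latest(pattern):
    """Safer sketch of the repo's getLatest helper: return the most
    recently modified file matching pattern, or None when nothing
    matches, instead of indexing into an empty list."""
    files = glob.glob(pattern)
    if not files:
        return None
    return max(files, key=os.path.getmtime)
```

In other words, check that config.load_model_dir actually contains the downloaded *.pth files before calling load_networks.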
I want to train an inpainting model and I want to use Paris StreetView, but it is not public. Can you send me a link to this dataset? Thank you very much. My email address: [email protected]
I pretrained the model on my own data for about 10 epochs, and the result does not converge. Then I wanted to try the finetuning step, but it failed. Can anyone help me? Thanks. @shepnerd (My images are 512x512, with about 1000 pics in the training set.)
RuntimeError: size mismatch, m1: [4 x 4096], m2: [16384 x 1] at C:/w/1/s/tmp_conda_3.7_055457/conda/conda-bld/pytorch_1565416617654/work/aten/src\THC/generic/THCTensorMathBlas.cu:273
Hi @shepnerd, thanks for the interesting work!
I have a hard time understanding the logic of discriminator. Could you please explain the motivation for the value range of discriminator outputs and respective loss?
Here is my understanding:
The evaluation of 'realism' for inpainted and ground-truth images is (excluding the local terms for brevity):
self.completed_logit, _ = self.netD(self.completed.detach(), self.completed_local.detach())
self.gt_logit, _ = self.netD(self.gt, self.gt_local)
where self.completed_logit and self.gt_logit are outputs from the last linear layer of the discriminator and contain a single value for each image in the batch. So the discriminator maps [b, c, h, w] --> [b, 1]. This part is clear, except for the range of the output values, which might be arbitrary due to the linear activation in the last layer of the discriminator; it might be -3, 5, 0.3, etc.
The loss term is defined as follows:
self.D_loss = nn.ReLU()(1.0 - self.gt_logit).mean() + nn.ReLU()(1.0 + self.completed_logit).mean()
As I understand it, a perfect discriminator should assign anything above 1 to a real image, self.gt_logit (identifying the ground truth as real), and anything below -1 to a fake image, self.completed_logit (identifying the inpainted image as fake).
Is that correct? If so, could you explain the practical difference between using these and the regular 0 and 1 adversarial ground truths used elsewhere?
Thanks!
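For other readers: the loss above is the hinge formulation of the GAN discriminator loss. A pure-Python toy (not the repo's code) showing the margin behaviour:

```python
def relu(x):
    return max(0.0, x)

def d_hinge_loss(gt_logit, fake_logit):
    """Per-sample hinge discriminator loss, mirroring the line above:
    real logits below +1 and fake logits above -1 are penalized;
    logits already past the margin contribute zero (and zero gradient)."""
    return relu(1.0 - gt_logit) + relu(1.0 + fake_logit)

print(d_hinge_loss(3.0, -2.0))  # 0.0: both logits are past the margins
print(d_hinge_loss(0.5, -2.0))  # 0.5: the real logit is inside the margin
```

Compared with 0/1 BCE targets, the margin means confidently classified samples stop contributing gradient, which is commonly credited with stabilizing training; whether that was the authors' motivation here is for them to confirm.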
@shepnerd Thanks for your excellent work! I notice you released the stroke model on Places2, and it works well on stroke masks. But when I finetune with rect masks from your released model, the test results often show heavy artifacts. For example:
Is this a normal result with rect masks on Places2? Can you release your trained model on Places2 with rect masks?
@shepnerd Thank you for your reply. I am very sorry; I made some mistakes in the previous issue. When training on 512-size CelebA-HQ images, I modified the following parameters in train_options.py:
self.parser.add_argument('--img_shapes', type=str, default='512,512,3',
help='given shape parameters: h,w,c or h,w')
self.parser.add_argument('--mask_shapes', type=str, default='256,256',
help='given mask parameters: h,w')
self.parser.add_argument('--g_cnum', type=int, default=64,
help='# of generator filters in first conv layer')
But it still does not work. Are there any other parameters that need to be modified?
Hi,
I tested the pretrained model, and the results were amazing.
Can you please tell us how to re-train the model with a custom image dataset?
What should the resolution of the images be?
And is there any preprocessing needed on the images before using them for training?
Thanks @shepnerd
Hey @shepnerd
How can I test your code on my own mask?
Hello~
I used the metrics.py from the paper "EdgeConnect: ..." to calculate PSNR and SSIM on the validation set of CelebA-HQ-256, but the results differ from yours, especially the SSIM. Could you share the file you used for calculating PSNR and SSIM?
And I want to clarify something, which needs your help:
When I make the CelebA-HQ-256 dataset from CelebA, there is a .zip of deltas. When I use it, there is noise in the photos, like colorful points in the black areas; if I don't, the photos are clean. What about the CelebA-HQ-256 you used? And what is the format (.jpg or .png)? It would be helpful if you shared the CelebA-HQ-256 dataset.
Thanks!
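Metric mismatches like this often come from differing data ranges or averaging order rather than the images themselves. One common PSNR formulation, as a hedged reference point (EdgeConnect's metrics.py may differ in details):

```python
import numpy as np

def psnr(gt, pred, max_val=255.0):
    """One common PSNR formulation (MSE over the whole image, 255 peak).
    Differences in peak value, data range (0-1 vs 0-255), or whether PSNR
    is averaged per image or over pooled MSE are typical causes of
    mismatched published numbers."""
    mse = np.mean((np.asarray(gt, np.float64) - np.asarray(pred, np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 10.0)
print(psnr(a, b))  # about 28.13 dB
```

SSIM is even more sensitive to such choices (window size, data_range, channel averaging), which plausibly explains the larger gap there.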
Hi! I'm trying to test the pretrained model on CelebA-HQ. I downloaded the checkpoint, put one image in the folder, and provided a text file with its path. I run this:
python /content/inpainting_gmcnn/tensorflow/test.py --data_file /content/1.txt --load_model_dir '/content/drive/My Drive/Multicolumn' --random_mask 0
but receive this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1607, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 3 in both shapes must be equal, but are 32 and 64. Shapes are [7,7,5,32] and [7,7,5,64]. for 'Assign' (op: 'Assign') with input shapes: [7,7,5,32], [7,7,5,64].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/inpainting_gmcnn/tensorflow/test.py", line 45, in
vars_list))
File "/content/inpainting_gmcnn/tensorflow/test.py", line 44, in
assign_ops = list(map(lambda x: tf.assign(x, tf.contrib.framework.load_variable(config.load_model_dir, x.name)),
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1770, in init
control_input_ops)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
raise ValueError(str(e))
ValueError: Dimension 3 in both shapes must be equal, but are 32 and 64. Shapes are [7,7,5,32] and [7,7,5,64]. for 'Assign' (op: 'Assign') with input shapes: [7,7,5,32], [7,7,5,64].
What might be the problem?
@shepnerd Thanks for your well-written code. My question is about the order of the parameters of the following function:
inpainting_gmcnn/tensorflow/net/ops.py
Line 472 in fe5295e
From the place this function is invoked:
inpainting_gmcnn/tensorflow/net/ops.py
Line 507 in fe5295e
inpainting_gmcnn/tensorflow/net/ops.py
Line 511 in fe5295e
I can figure out that feat_A is the feature computed from the predicted data, and feat_B is the feature computed from the ground-truth data. This function then invokes mrf_loss:
mrf_loss(feat_A, feat_B, distance=config.Dist, nnsigma=config.nn_stretch_sigma)
In the definition of the mrf_loss function:
inpainting_gmcnn/tensorflow/net/ops.py
Line 409 in fe5295e
I can figure out that T_features should be the feature computed from the ground-truth data, and I_features should be the feature computed from the predicted data. So is the order of the parameters wrong? Maybe I misunderstand something; could you help me check this?
python test.py --dataset paris_streetview --data_file imgs/paris-streetview_256x256/ --load_model_dir checkpoints/paris-streetview_256x256_rect/ --random_mask 0 --g_cnum 32
Traceback (most recent call last):
File "test.py", line 37, in
output = model.evaluate(input_image_tf, input_mask_tf, config=config, reuse=reuse)
File "C:\Projects\Image_inpainting\net\network.py", line 300, in evaluate
batch_predict = self.build_generator(im, mask, reuse = reuse)
File "C:\Projects\Image_inpainting\net\network.py", line 37, in build_generator
x = conv_7( x_w_mask, filters = cnum, strides = 1, name = b_names[0] + 'conv1' )
TypeError: __init__() got multiple values for argument 'filters'
Thanks for your excellent work! I have some doubts about testing on Paris StreetView. When I test images with your released trained weights, I find the results differ from those in your published paper.
I used the following command for testing:
python test.py --dataset paris_streetview
--data_file ./imgs/paris-streetview_256x256/
--load_model_dir ./checkpoints/paris-streetview_256x256_rect/ --random_mask 0
Is there anything I am missing?
Excuse me, I want to run the pretrained CelebA-HQ-256 model, but I get the following error:
return files[sorted(range(len(file_times)), key=lambda x: file_times[x])[-1]]
IndexError: list index out of range
I can't solve it.
Hello!
I'm trying to finetune the model for my data using the model pre-trained on Places2.
For training, I have specified the parameter --load_model_dir to the directory where I have saved the Places2 weights that you shared (Thanks a lot!) and other parameters that were required in the README.
After overnight training, I found that the only output was the checkpoints folder with an events.out.tfevents.1576231107.PM1 file.
The files in the Places2 directory were not changed.
What am I doing wrong?
@shepnerd Thank you for the great work. After reading the paper and the code, I cannot understand the following code quite well and need your help.
When computing relative similarity between v and s:
inpainting_gmcnn/tensorflow/net/ops.py
Line 330 in fe5295e
def calc_relative_distances(self, axis=3):
epsilon = 1e-5
div = tf.reduce_min(self.raw_distances, axis=axis, keep_dims=True)
relative_dist = self.raw_distances / (div + epsilon)
return relative_dist
In my understanding, max(r) in the paper is div in the code. When s is not equal to the value in div, the code is OK. But when s is equal to the value in div, then, as the paper says, div should exclude this value first and find another suitable value. The code doesn't handle the case when s equals the value in div, and doesn't exclude it. Is this a small bug in the code, or is something wrong in my understanding?
Hope I described it clearly.
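To make the discussion concrete, here is a toy numpy restatement of calc_relative_distances (my reading of the code, stated as an assumption, not an answer from the authors):

```python
import numpy as np

def calc_relative_distances(raw, axis=-1, epsilon=1e-5):
    # Divide each raw distance by the minimum along the axis, so the
    # element that attains the minimum (s itself, when it is in the set)
    # maps to a relative distance of about 1; nothing is excluded.
    div = raw.min(axis=axis, keepdims=True)
    return raw / (div + epsilon)

raw = np.array([[2.0, 4.0, 8.0]])
rel = calc_relative_distances(raw)
print(rel)  # approximately [[1., 2., 4.]]
```

So the code indeed keeps the minimizing element rather than excluding it, which matches the behaviour you describe; whether that is intended or a divergence from the paper is a question for the authors.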
Hi, I used the pretrained ImageNet weights with the rect inpainting style to train on my dataset (about 4000 images). Here are some hyperparameters: --batch_size 16 --mask_type rect --lr 1e-4 --train_spe 4000 --max_iters 72000
It seems the training is hard to converge. So what should I do next? Continue to pretrain? How many iterations should I train? Thanks. See the pics below:
I did not modify any parameters during my training. Why did the ID-MRF loss increase during the second phase of training?
Sorry for opening this issue; I am a graduate student working on image inpainting.
I have searched for the Paris StreetView dataset in many ways, and even emailed Pathak.
I would appreciate it if you could send a link to the dataset to [email protected]
I think your work is meaningful, and I sincerely thank you.
ValueError: Cannot feed value of shape (1, 512, 512, 3) for Tensor 'Placeholder:0', which has shape '(1, 512, 680, 3)'
I used a 512*512 color picture provided by your project.
I have tested your TensorFlow implementation on the sample data already present in the imgs/places2_256x256 folder. Now I want to test my own image. My image already has some holes present; it is given in this link.
Accordingly, I tried modifying your code by removing the following portion from your test.py file:
if h >= config.img_shapes[0] and w >= config.img_shapes[1]:
h_start = (h-config.img_shapes[0]) // 2
w_start = (w-config.img_shapes[1]) // 2
image = image[h_start: h_start+config.img_shapes[0], w_start: w_start+config.img_shapes[1], :]
else:
t = min(h, w)
image = image[(h-t)//2:(h-t)//2+t, (w-t)//2:(w-t)//2+t, :]
image = cv2.resize(image, (config.img_shapes[1], config.img_shapes[0]))
image = image * (1-mask) + 255 * mask
I only want to fill the holes in the images through your code; I already have images in which holes are present. Could you explain how to do this?
I can see that I need to apply a mask to the portion to be refilled and then run the session, but how do I do that?
Here is the portion I guess I need to change, but I wonder how:
for i in range(test_num):
if config.mask_type == 'rect':
mask = generate_mask_rect(config.img_shapes, config.mask_shapes, config.random_mask)
else:
mask = generate_mask_stroke(im_size=(config.img_shapes[0], config.img_shapes[1]), parts=8, maxBrushWidth=24, maxLength=100, maxVertex=20)
Thanks in advance. Waiting for your reply.
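One workaround, assuming the existing holes are pure white (a guess about your data, not something the repo provides): threshold the image to recover a binary mask, then feed that mask to the model in place of the generated one. The helper below is my own sketch:

```python
import numpy as np

def mask_from_white_holes(image, thresh=250):
    """Recover a binary hole mask from an HxWx3 uint8 image whose holes
    are (near-)white; returns an HxWx1 float mask, 1.0 inside holes."""
    hole = np.all(image >= thresh, axis=2)
    return hole.astype(np.float32)[:, :, None]

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = 255                   # one white "hole" pixel
m = mask_from_white_holes(img)
print(m[:, :, 0])                 # 1.0 at the hole pixel, 0.0 elsewhere
```

In the loop you quoted, you would then skip generate_mask_rect / generate_mask_stroke and pass this recovered mask into the session feed instead.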
I followed your instructions in issue #9 by setting the learning rate to 1e-5 and by changing "mask_priority = priority_loss_mask(mask)" to "mask_priority = priority_loss_mask(mask, hsize=128, sigma=1.0 / 60, iters=16)" in network.py (L178). See the image below for my results.
(Top left: pretrain results; top right: fine-tuning at 100 epochs; bottom left: fine-tuning at 500 epochs; bottom right: fine-tuning at 1000 epochs.)
Below is a screenshot of my training loss.
Am I doing anything wrong? Are these results normal? #5
Hi, is it possible to convert your trained PyTorch or TensorFlow model into CoreML without implementing custom layers?
Here
inpainting_gmcnn/pytorch/model/net.py
Line 258 in 512acf3
$ python3 test.py --dataset paris_streetview --data_file ./imgs/paris-streetview_256x256/ --load_model_dir ./checkpoints/paris-streetview_256x256_rect --random_mask 0
/home/slothdemon/.local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
(The same FutureWarning repeats for quint8, qint16, quint16, qint32, and np_resource, from both the tensorflow and tensorboard dtypes modules.)
Traceback (most recent call last):
File "test.py", line 12, in
"nvidia-smi -q -d Memory | grep -A4 GPU | grep Free", shell=True, stdout=subprocess.PIPE).stdout.readlines()]
File "<array_function internals>", line 6, in argmax
File "/home/slothdemon/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 1153, in argmax
return _wrapfunc(a, 'argmax', axis=axis, out=out)
File "/home/slothdemon/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "/home/slothdemon/.local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 47, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
ValueError: attempt to get argmax of an empty sequence
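Judging from the trace, the crash happens before any inpainting code runs: test.py picks a GPU by parsing nvidia-smi output and taking an argmax. A sketch of that logic (an assumption based on the trace, not the repo's exact code):

```python
import numpy as np

def pick_freest_gpu(lines):
    """Parse free-memory values from nvidia-smi output lines and pick
    the device with the most free memory. On a machine without an
    NVIDIA GPU (or without nvidia-smi on PATH), the list is empty and
    np.argmax([]) raises 'attempt to get argmax of an empty sequence'."""
    free = [int(s.split()[2]) for s in lines]   # e.g. "Free : 7953 MiB"
    if not free:                                # guard the original lacks
        return None
    return int(np.argmax(free))

print(pick_freest_gpu([]))                                    # None
print(pick_freest_gpu(["Free : 100 MiB", "Free : 900 MiB"]))  # 1
```

So the likely cause is that nvidia-smi returned nothing on this machine; checking the driver installation, or hard-coding the device instead of auto-selecting, should get past this line.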
Traceback (most recent call last):
File "/media/inpainting_gmcnn-master/pytorch/test.py", line 32, in
ourModel.load_networks(getLatest(os.path.join(config.load_model_dir, '*.pth')))
File "/media/inpainting_gmcnn-master/pytorch/util/utils.py", line 84, in getLatest
return files[sorted(range(len(file_times)), key=lambda x: file_times[x])[-1]]
IndexError: list index out of range
Hello, I find that the provided pretrained model "places2_512x680_freeform" works well with stroke masks, but the results are poor with rect masks. Can you provide a pretrained model with rect masks? Thanks.
Hi @shepnerd ,
What do you think of applying additional batch normalization layers? Have you tried utilizing batch normalization to stabilize learning in the generator and discriminators?
Hi, I was testing a random image on the 512x680 pretrained model, but the results do not seem to reflect what they should be. Do you have any suggestions on how to improve them? Will they improve if I retrain your 512x680 model using 512x512 images? My assumption is that 256x256 images were used in training the 512x680 model (I might be wrong; kindly correct me), because when I set the input tensor to size 256x256 for the same image (after resizing, of course), the inpainting results are good.
Image chosen from internet randomly
Inpainting results
Inpainting result when the input tensor is set to 256x256, on the same resized image
@shepnerd, hello. Thanks for your great work; I am also looking forward to the paper 'Semantic Regeneration Network'.
Since GMCNN has a huge number of parameters, I wonder how much time you spent training the model on the different datasets? Thanks!
model setting up..
training initializing..
Main process:
Traceback (most recent call last):
  File "train.py", line 34, in <module>
    for i, data in enumerate(dataloader):
  File "D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
  File "D:\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Child process (spawned):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="mp_main")
  File "D:\Anaconda3\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "D:\Anaconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "D:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\PycharmProjects2\inpainting_gmcnn\pytorch\train.py", line 34, in <module>
    for i, data in enumerate(dataloader):
  File "D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
  File "D:\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "D:\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
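The RuntimeError at the end states the fix: on Windows, the spawn start method re-imports train.py in each DataLoader worker, so the training entry point must sit behind a main guard (or the DataLoader's num_workers can be reduced to 0). This is general Python multiprocessing advice, not repo-specific code; a minimal pattern:

```python
# The loop that currently sits at module level in train.py
# (for i, data in enumerate(dataloader): ...) belongs inside main().
def main():
    # build the dataloader/model and run the training loop here
    return 'ok'

if __name__ == '__main__':
    main()
```

With the guard in place, the re-import performed by the spawned worker no longer re-executes the training loop, and both tracebacks above disappear.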
Hello, my question is: what does the mask used for testing on Places2 and CelebA-HQ-256 look like in your quantitative experiments?
Hi @shepnerd ,
I've analysed your code and I have a question regarding the input to the generator.
When building the input, you concatenate the masked image with the mask and a matrix of ones. I understand the pros of using the mask as an additional input, but what is the intuition behind the usage of one more channel filled with 1s?
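For reference, a numpy sketch of my reading of that input assembly (an assumption about channel order, not the repo's exact code):

```python
import numpy as np

# Masked image, a constant all-ones plane, and the binary mask,
# concatenated along the channel axis.
h, w = 4, 4
image = np.random.rand(1, h, w, 3)
mask = np.zeros((1, h, w, 1))
mask[0, 1:3, 1:3, 0] = 1.0

masked = image * (1.0 - mask)       # zero out the hole region
ones = np.ones((1, h, w, 1))        # the extra constant channel in question
x = np.concatenate([masked, ones, mask], axis=3)
print(x.shape)  # (1, 4, 4, 5)
```

One common intuition (my speculation, not the authors' statement) is that the constant channel acts like an input-independent bias that survives the masking, giving the first convolution a uniform reference signal inside the hole; only the authors can confirm their motivation.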