
ennauata / housegan

238 stars · 13 watchers · 65 forks · 194 KB

House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation

Home Page: https://ennauata.github.io/housegan/page.html

License: Other

Languages: Python 99.31% · Shell 0.69%


housegan's Issues

Applying for SFU's dataset

Hello, I would like to apply for your dataset to do research, but I haven't found instructions on how to apply. Can you tell me what I need to do to apply for SFU's dataset?

IndexError: list index out of range

When I tried to train the model, I encountered the following error:

107315
target samples: defaultdict(<class 'int'>, {3: 4126, 7: 18971, 13: 5454, 9: 16395, 6: 16179, 17: 241, 1: 2138, 8: 17591, 4: 7005, 5: 11547, 2: 2667, 14: 2875, 18: 122, 15: 1248, 16: 531, 20: 47, 24: 19, 19: 82, 23: 13, 22: 25, 21: 20, 34: 1, 32: 2, 26: 4, 25: 5, 30: 1, 29: 1, 31: 1, 27: 2, 35: 1, 28: 1})
5000
target samples: defaultdict(<class 'int'>, {11: 1787, 12: 1260, 10: 1953})
Traceback (most recent call last):
File "main.py", line 198, in
gen_mks = data_parallel(generator, (z, given_nds, given_eds), indices)
File "main.py", line 103, in data_parallel
output_device = device_ids[0]
IndexError: list index out of range

Any ideas?
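The traceback suggests device_ids is empty when main.py's data_parallel helper indexes device_ids[0], which happens when no CUDA device is visible. A minimal, hedged sketch of a guard, illustrated with torch's built-in torch.nn.parallel.data_parallel rather than main.py's own helper:

    import torch
    from torch.nn.parallel import data_parallel

    def safe_generate(generator, inputs, device_ids):
        # Fall back to a plain forward pass when no GPUs are visible,
        # instead of letting data_parallel index an empty device_ids list.
        if torch.cuda.is_available() and len(device_ids) > 0:
            return data_parallel(generator, inputs, device_ids=device_ids)
        return generator(*inputs)  # single-device fallback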

Question about applying loss function in HouseGAN

Hi there. I have a question about how the adversarial loss is applied in House-GAN. I noticed that in main.py, self.adversarial_loss is defined but never used, while the loss is instead computed directly from the discriminator's output.
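For what it's worth, computing losses directly from the raw discriminator output is the usual WGAN formulation, which may be what the code intends. A minimal sketch of that pattern (illustrative only, not taken from main.py):

    import torch

    def wgan_d_loss(d_real, d_fake):
        # the critic is pushed to score real samples high and fakes low
        return -torch.mean(d_real) + torch.mean(d_fake)

    def wgan_g_loss(d_fake):
        # the generator is pushed to raise the critic's score on fakes
        return -torch.mean(d_fake)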

How long does training take?

Hi, I would like to know how many epochs (n_epochs) you used to get the results in your paper, and how long training takes on two GPUs. Thanks.

As far as I can tell, you set n_epochs=1000000, while I can train only about 70 epochs per day on two RTX 6000 GPUs (24 GB each), so finishing would take roughly 1000000/70 ≈ 14,286 days. That can't be the intended setup.

How to run the House-GAN demo?

[screenshot of the House-GAN demo UI]
Hello, I am very interested in your House-GAN model, but I don't know how to run the demo shown in the screenshot.
Do I need to run "pytorch_visualize.py" or "visualize.py" directly? I tried running them, but the UI never appeared.
I look forward to your reply. Thank you.

Multi-level houses

Hey there, thanks for the great work here. I wanted to ask: if I want to make it generate floor plans for multi-level houses, how can I do that? Where should I look?

Requesting environment info

A requirements.txt, environment.yml, or Dockerfile showing what's needed to run this project would be very helpful.
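A hypothetical minimal requirements.txt, inferred only from the imports visible in the tracebacks across these issues (exact versions unknown):

    torch
    torchvision
    numpy
    networkx
    pygraphviz
    matplotlib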

About node features in the dataset

I'm exploring a node classification problem on the dataset you used, and I would like to know whether any relevant node features are available for it. I can see that the node property is a label denoting the room type. Is it possible to extract node features from this dataset that would be suitable for tasks like classifying the room type? Any pointers would be really appreciated. Thanks in advance.

floorplan_dataset_maps

	if split == 'train':
		# training split: load train_data.npy with augmentation enabled
		self.subgraphs = np.load('{}/train_data.npy'.format(self.shapes_path), allow_pickle=True)
		self.augment = True
	elif split == 'eval':
		# evaluation split: note that this branch also loads train_data.npy
		self.subgraphs = np.load('{}/train_data.npy'.format(self.shapes_path), allow_pickle=True)
		self.augment = False

When split == 'eval', why does the dataset still use train_data.npy? Is something wrong here?
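A hedged guess at the intended branch, assuming the valid_data.npy that ships alongside train_data.npy (mentioned in a later issue) is the held-out split:

    import numpy as np

    def load_split(shapes_path, split):
        # hypothetical helper: pick the file by split instead of
        # hard-coding train_data.npy in both branches
        fname = 'train_data.npy' if split == 'train' else 'valid_data.npy'
        subgraphs = np.load('{}/{}'.format(shapes_path, fname), allow_pickle=True)
        augment = (split == 'train')  # augment only during training
        return subgraphs, augment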

Model not saving after training

Hi everyone,

I am new to GANs and torch, so please bear with my limited knowledge here.

I was trying to train on set E, and everything seemed fine until just before the last iteration of the last epoch, when the script exited without saving the model's .pth file. The changes I made in main.py are:

  1. Directories and names adapted to my own machine.
  2. I added if __name__ == '__main__': between line 20 (from models import...) and line 23 (where the parser starts) so that it runs on Windows 10 (see the sketch below).
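A minimal sketch of the guard described in item 2, assuming main.py's top-level structure: on Windows, DataLoader worker processes re-import the main module, so training must not start at import time.

    # line 20 of main.py: from models import ...   (imports stay at module level)

    def train():
        # the argument parser and training loop from main.py would go here
        ...

    if __name__ == '__main__':
        train()  # only the parent process enters the training loop on Windows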

By the way, there is no problem whatsoever running the pre-trained model. I was trying to train a model for more rooms/a bigger graph size, like 20+.

Start:

PS C:\machinelearning\housegan> python main.py --target_set E --exp_folder exp_example
134626
target samples: defaultdict(<class 'int'>, {3: 4126, 7: 18971, 9: 16395, 6: 16179, 11: 13357, 1: 2138, 12: 9390, 10: 15260, 8: 17591, 4: 7005, 5: 11547, 2: 2667})
5000
target samples: defaultdict(<class 'int'>, {13: 2561, 17: 108, 14: 1323, 18: 53, 15: 599, 16: 241, 20: 24, 24: 12, 19: 39, 23: 8, 22: 17, 21: 8, 34: 1, 32: 1, 26: 2, 25: 2, 30: 1})
[Epoch 0/1] [Batch 0/2104] [D loss: 9.999939] [G loss: 0.085443]
[Epoch 0/1] [Batch 1/2104] [D loss: 9.999902] [G loss: 0.087042]
[Epoch 0/1] [Batch 2/2104] [D loss: 9.999855] [G loss: 0.089001]

End:

[Epoch 0/1] [Batch 2102/2104] [D loss: -36.034729] [G loss: -988.335083]
[Epoch 0/1] [Batch 2103/2104] [D loss: -32.958317] [G loss: -837.985046]
PS C:\machinelearning\housegan>

Please advise, thank you!
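One hedged possibility: if the script's save condition (its exact form isn't shown here) is never hit in a short run, an unconditional save after the training loop guarantees a .pth file. The names generator, discriminator, and the paths below are assumptions, not main.py's actual code:

    import torch

    def save_final(generator, discriminator, folder='./checkpoints'):
        # hypothetical fallback: write weights unconditionally after the
        # loop finishes, so even a 1-epoch run produces .pth files
        torch.save(generator.state_dict(), folder + '/gen_final.pth')
        torch.save(discriminator.state_dict(), folder + '/disc_final.pth')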

About training speed and the comparison experiments

Hello, could you please tell me what training hardware and hyperparameters you used? main.py runs very slowly for me, and I don't know what causes it.
In addition, I would like to ask whether the code for the GCN baseline used in the comparison experiments is provided.
Thank you for your reply.

Unable to install pygraphviz (windows)

Hi
First of all, thanks for such an amazing project.
I want to run the pretrained models, but when I do, it keeps showing this error:

Traceback (most recent call last):
File "C:\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 279, in pygraphviz_layout
import pygraphviz
File "C:\Python39\lib\site-packages\pygraphviz_init_.py", line 56, in
from .agraph import AGraph, Node, Edge, Attribute, ItemAttribute, DotError
File "C:\Python39\lib\site-packages\pygraphviz\agraph.py", line 20, in
from . import graphviz as gv
File "C:\Python39\lib\site-packages\pygraphviz\graphviz.py", line 13, in
from . import _graphviz
ImportError: DLL load failed while importing _graphviz: The specified module could not be found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\z.rostami\PycharmProjects\housegan\variation_bbs_with_target_graph_segments_suppl.py", line 232, in
graph_arr = draw_graph([real_nodes, eds.detach().cpu().numpy()])
File "C:\Users\z.rostami\PycharmProjects\housegan\variation_bbs_with_target_graph_segments_suppl.py", line 70, in draw_graph
pos = nx.nx_agraph.graphviz_layout(G_true, prog='neato')
File "C:\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 237, in graphviz_layout
return pygraphviz_layout(G, prog=prog, root=root, args=args)
File "C:\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 281, in pygraphviz_layout
raise ImportError("requires pygraphviz " "http://pygraphviz.github.io/") from e
ImportError: requires pygraphviz http://pygraphviz.github.io/

I spent a lot of time setting up Graphviz and pygraphviz, and I think I succeeded, but the problem with pygraphviz still exists.
Isn't there any way to set up these libraries on Windows? Should I install Anaconda just for one library, or install VMware and clone the project in Linux? Has anyone installed pygraphviz on Windows?
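For what it's worth, the conda-forge channel ships prebuilt pygraphviz binaries for Windows, which sidesteps the DLL lookup issue; a commonly used install line (hedged, environment-dependent):

    conda install -c conda-forge pygraphviz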

Can't have outputs with a different --latent_dim (default=128)

I was trying to figure out how the model would work with a bigger or smaller latent space, but while running

%run variation_bbs_with_target_graph_segments_suppl.py --batch_size 1 --channels 1 --exp_folder exp --latent_dim 64 --num_variations 4

with --latent_dim 64, and likewise with --latent_dim 256, I got similar errors:


RuntimeError Traceback (most recent call last)
~\housegan\variation_bbs_with_target_graph_segments_suppl.py in
212 z = Variable(Tensor(np.random.normal(0, 1, (real_mks.shape[0], opt.latent_dim))))
213 with torch.no_grad():
--> 214 gen_mks = generator(z, given_nds, given_eds)
215 gen_bbs = np.array([np.array(mask_to_bb(mk)) for mk in gen_mks.detach().cpu()])
216 real_bbs = np.array([np.array(mask_to_bb(mk)) for mk in real_mks.detach().cpu()])

~\anaconda3\envs\housegan\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)

~\housegan\models.py in forward(self, z, given_y, given_w)
147 if True:
148 y = given_y.view(-1, 10)
--> 149 z = torch.cat([z, y], 1)
150 x = self.l1(z)
151 x = x.view(-1, 16, self.init_size, self.init_size)

RuntimeError: Sizes of tensors must match except in dimension 0. Got 10 and 5

PS: I'm using the pre-trained model.
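A hedged reading of the failure: the checkpoint's weight shapes bake in latent_dim=128, so the pre-trained generator's early layers only accept a 128-wide z; feeding a 64- or 256-dim z then produces size mismatches like the one above. An illustrative sketch (the layer sizes are assumptions, not models.py's actual values):

    import torch
    import torch.nn as nn

    node_dim = 10                          # per the traceback: y = given_y.view(-1, 10)
    l1 = nn.Linear(128 + node_dim, 1024)   # in_features fixed when the checkpoint was trained

    z = torch.randn(5, 64)                 # a 64-dim z no longer fits
    y = torch.randn(5, node_dim)
    x = torch.cat([z, y], 1)               # shape (5, 74) -- the cat itself succeeds here
    out = l1(x)                            # raises: l1 expects 138 input features, got 74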

Typo?

  1. Why does the evaluation stage also use train_data.npy, as shown in this line?

  2. What is the difference between train_data.npy and housegan_clean_data.npy? Thanks.

Evaluation

Hi, can you give instructions on how to evaluate the model with the three metrics you used? Thanks.

RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 0

When I run python variation_bbs_with_target_graph_segments_suppl.py, I get the following error.

$ python variation_bbs_with_target_graph_segments_suppl.py
Namespace(batch_size=1, channels=1, exp_folder='exp', latent_dim=128, n_cpu=4, num_variations=4)
5000
target samples: defaultdict(<class 'int'>, {12: 1239, 11: 1727, 10: 2034})
/usr/local/miniconda3/envs/pythonocc/lib/python3.7/site-packages/pygraphviz/agraph.py:1341: RuntimeWarning: Warning: b is not a known color.

warnings.warn(b"".join(errors).decode(self.encoding), RuntimeWarning)
variation_bbs_with_target_graph_segments_suppl.py:75: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
plt.tight_layout()
(256, 256, 4)
[the shape above is printed once per generated image; ~99 identical lines elided]
tensor([[[[1., 1., 1.,  ..., 1., 1., 1.],
          ...
[full tensor printout elided: a batch of 4-channel 256×256 image tensors, mostly ones, with some all-zero and partially zeroed channels]

Traceback (most recent call last):
File "variation_bbs_with_target_graph_segments_suppl.py", line 250, in
save_image(final_images, "./output/results_page_{}{}.png".format(target_set, page_count), nrow=2*opt.num_variations+1, padding=2, range=(0, 1), pad_value=0.5, normalize=False)
File "/usr/local/miniconda3/envs/pythonocc/lib/python3.7/site-packages/torchvision/utils.py", line 101, in save_image
normalize=normalize, range=range, scale_each=scale_each)
File "/usr/local/miniconda3/envs/pythonocc/lib/python3.7/site-packages/torchvision/utils.py", line 85, in make_grid
.copy
(tensor[k])
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 0
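A hedged guess at the cause: the collected images mix 3-channel (RGB) and 4-channel (RGBA) tensors, and torchvision's make_grid needs a uniform channel count across the batch. A minimal fix sketch, assuming final_images is the list of image tensors built in the script:

    # drop the alpha channel wherever present so every image is 3-channel
    final_images = [im[:3] if im.shape[0] == 4 else im for im in final_images]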

About the dataset and the pretrained model

Dear author,
Thanks for your amazing work on this GAN, but when I ran your code on my machine I found some problems.
1. I downloaded the dataset from Dropbox; it contained a pretrained model, "exp_demo_D_5000000.pth". Is this the pretrained model for variation_bbs_with_target_graph_segments_suppl.py? When I moved it to the checkpoint folder and used it as the pretrained model, I got the problem below (see the sketch after this list):
Traceback (most recent call last):
File "variation_bbs_with_target_graph_segments_suppl.py", line 225, in
graph_arr = draw_graph([real_nodes, eds.detach().cpu().numpy()])
File "variation_bbs_with_target_graph_segments_suppl.py", line 75, in draw_graph
nx.draw(G_true, pos, node_size=1000, node_color=colors_H, font_size=0, font_weight='bold', edges=edges, edge_color=colors, width=weights)
File "/home/wc/anaconda3/envs/pytorch_cuda10.1/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py", line 123, in draw
draw_networkx(G, pos=pos, ax=ax, **kwds)
File "/home/wc/anaconda3/envs/pytorch_cuda10.1/lib/python3.7/site-packages/networkx/drawing/nx_pylab.py", line 326, in draw_networkx
raise ValueError(f"Received invalid argument(s): {invalid_args}")
ValueError: Received invalid argument(s): edges

2. The Dropbox download contains a folder named dataset_paper with two files, train_data.npy and valid_data.npy. Are these used by main.py? Or should I rename housegan_clean_data.npy to train_data.npy for use with main.py?

3. Should I run variation_bbs_with_target_graph_segments_suppl.py before running main.py? I ran main.py without running variation_bbs_with_target_graph_segments_suppl.py first, and main.py still works...
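Regarding the error in item 1: newer networkx releases validate keyword arguments and reject 'edges'; the supported keyword is 'edgelist'. A hedged one-line change to the call shown in the traceback (all other names as in the script):

    nx.draw(G_true, pos, node_size=1000, node_color=colors_H,
            font_size=0, font_weight='bold',
            edgelist=edges, edge_color=colors, width=weights)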

Looking forward to your answer.

Dataset problems

Excuse me, I have applied for the LIFULL HOME dataset, but I don't know how to divide it into training and test sets. In addition, the dataset URL you gave no longer works; I can't open it. Can you tell me how to handle this?
