ranahanocka / MeshCNN
Convolutional Neural Network for 3D meshes in PyTorch
License: MIT License
Hi, Rana,
Thank you for your excellent work. I found an error when running
bash ./scripts/shrec/train.sh
Below is the program output:
------------ Options -------------
arch: mconvnet
batch_size: 16
beta1: 0.9
checkpoints_dir: ./checkpoints
continue_train: False
dataroot: datasets/shrec_16
dataset_mode: classification
epoch_count: 1
export_folder:
fc_n: 100
flip_edges: 0.2
gpu_ids: [0]
init_gain: 0.02
init_type: normal
is_train: True
lr: 0.0002
lr_decay_iters: 50
lr_policy: lambda
name: shrec16
ncf: [64, 128, 256, 256]
ninput_edges: 750
niter: 100
niter_decay: 100
no_vis: False
norm: group
num_aug: 20
num_groups: 16
num_threads: 3
phase: train
pool_res: [600, 450, 300, 180]
print_freq: 10
resblocks: 1
run_test_freq: 1
save_epoch_freq: 1
save_latest_freq: 250
scale_verts: False
seed: None
serial_batches: False
slide_verts: 0.2
verbose_plot: False
which_epoch: latest
-------------- End ----------------
loaded mean / std from cache
#training meshes = 480
---------- Network initialized -------------
[Network] Total number of parameters : 1.323 M
-----------------------------------------------
Traceback (most recent call last):
File "train.py", line 23, in <module>
for i, data in enumerate(dataset):
File "/home/maiqi/yalong/project/more-personal/deep-3d/MeshCNN/data/__init__.py", line 33, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 637, in __next__
return self._process_next_batch(batch)
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/maiqi/yalong/project/more-personal/deep-3d/MeshCNN/data/classification_data.py", line 27, in __getitem__
mesh = Mesh(file=path, opt=self.opt, hold_history=False, export_folder=self.opt.export_folder)
File "/home/maiqi/yalong/project/more-personal/deep-3d/MeshCNN/models/layers/mesh.py", line 16, in __init__
fill_mesh(self, file, opt)
File "/home/maiqi/yalong/project/more-personal/deep-3d/MeshCNN/models/layers/mesh_prepare.py", line 21, in fill_mesh
mesh2fill.ve = mesh_data['ve']
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/numpy/lib/npyio.py", line 262, in __getitem__
pickle_kwargs=self.pickle_kwargs)
File "/home/maiqi/yalong/software/anaconda3/envs/meshcnn/lib/python3.6/site-packages/numpy/lib/format.py", line 692, in read_array
raise ValueError("Object arrays cannot be loaded when "
ValueError: Object arrays cannot be loaded when allow_pickle=False
Machine OS: Ubuntu 16.04
Conda environment:
#
# Name Version Build Channel
astroid 2.2.5 py36_0
blas 1.0 mkl
ca-certificates 2019.1.23 0
certifi 2019.3.9 py36_0
cffi 1.12.3 py36h2e261b9_0
cudatoolkit 9.2 0
cudnn 7.3.1 cuda9.2_0
cycler 0.10.0 py36_0
cython 0.29.7 py36he6710b0_0
dbus 1.13.6 h746ee38_0
expat 2.2.6 he6710b0_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
glib 2.56.2 hd408876_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
icu 58.2 h9c2bf20_1
intel-openmp 2019.3 199
isort 4.3.17 py36_0
jpeg 9b h024ee3a_2
kiwisolver 1.1.0 py36he6710b0_0
lazy-object-proxy 1.3.1 py36h14c3975_2
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.10 h2733197_2
libuuid 1.0.3 h1bed415_2
libxcb 1.13 h1bed415_1
libxml2 2.9.9 he19cac6_0
matplotlib 3.0.3 py36h5429711_0
mccabe 0.6.1 py36_1
mkl 2019.3 199
mkl_fft 1.0.12 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
ncurses 6.1 he6710b0_1
ninja 1.9.0 py36hfd86e86_0
numpy 1.16.3 py36h7e9f1db_0
numpy-base 1.16.3 py36hde5b4d6_0
olefile 0.46 py36_0
openssl 1.1.1b h7b6447c_1
pcre 8.43 he6710b0_0
pillow 6.0.0 py36h34e0f95_0
pip 19.1 py36_0
protobuf 3.7.1 pypi_0 pypi
pycparser 2.19 py36_0
pylint 2.3.1 py36_0
pyparsing 2.4.0 py_0
pyqt 5.9.2 py36h05f1152_2
python 3.6.8 h0371630_0
python-dateutil 2.8.0 py36_0
pytorch 1.0.1 cuda92py36h65efead_0
pytz 2019.1 py_0
qt 5.9.7 h5867ecd_1
readline 7.0 h7b6447c_5
setuptools 41.0.1 py36_0
sip 4.19.8 py36hf484d3e_0
six 1.12.0 py36_0
sqlite 3.28.0 h7b6447c_0
tensorboardx 1.6 pypi_0 pypi
tk 8.6.8 hbc83047_0
torchvision 0.2.2 py_3 pytorch
tornado 6.0.2 py36h7b6447c_0
typed-ast 1.3.4 py36h7b6447c_0
wheel 0.33.1 py36_0
wrapt 1.11.1 py36h7b6447c_0
xz 5.2.4 h14c3975_4
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
Any advice? Thanks.
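For anyone hitting this: the failure comes from NumPy 1.16.3 changing np.load's default to allow_pickle=False, while the cached .npz that fill_mesh reads contains object arrays. A minimal sketch of one workaround (pinning numpy<1.16.3 also works; the helper name here is hypothetical):

import numpy as np

def load_mesh_cache(load_path):
    # allow_pickle=True restores the pre-1.16.3 behaviour; only use it on
    # cache files you generated yourself, since unpickling can execute code
    return np.load(load_path, encoding='latin1', allow_pickle=True)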
Hi, Rana,
I want to parse the human segmentation dataset, but I cannot find a description of it. Is there an online page or document describing it? And how can I understand/parse the .eseg and .seseg label files?
Thanks!
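For anyone else parsing these, my understanding (an assumption from reading data/segmentation_data.py, not an official spec) is that a .eseg file holds one hard integer label per mesh edge, one per line, while a .seseg file holds one row per edge of soft per-class weights used near segment boundaries. A minimal loading sketch:

import numpy as np

def read_eseg(path):
    # one hard label per edge, one line per edge
    return np.loadtxt(path, dtype=np.int64)

def read_seseg(path):
    # one row per edge, one column per class: soft membership weights
    return np.loadtxt(path, dtype=np.float64)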
Could you please explain the function __get_invalids defined in mesh_pool.py? I would like to know what happens after these operations:
MeshPool.__redirect_edges(mesh, edge_id, side, update_key_a, update_side_a)
MeshPool.__redirect_edges(mesh, edge_id, side + 1, update_key_b, update_side_b)
MeshPool.__redirect_edges(mesh, update_key_a, MeshPool.__get_other_side(update_side_a), update_key_b, MeshPool.__get_other_side(update_side_b))
MeshPool.__union_groups(mesh, edge_groups, key_a, edge_id)
MeshPool.__union_groups(mesh, edge_groups, key_b, edge_id)
MeshPool.__union_groups(mesh, edge_groups, key_a, update_key_a)
MeshPool.__union_groups(mesh, edge_groups, middle_edge, update_key_a)
MeshPool.__union_groups(mesh, edge_groups, key_b, update_key_b)
MeshPool.__union_groups(mesh, edge_groups, middle_edge, update_key_b)
Thank you very much!
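Not the maintainer, but a rough reading of that snippet, as comments (an interpretation of the surrounding code, not authoritative):

# __redirect_edges(mesh, e, side, f, f_side) rewires mesh.gemm_edges (and
# mesh.sides) so that neighborhood slot `side` of edge e points at edge f:
# after the collapse, the surviving edges of each merged triangle become
# each other's neighbors.
# __union_groups(mesh, edge_groups, source, target) merges the feature
# group of `source` into `target`, so the pooled feature of a kept edge
# aggregates every edge that was collapsed into it.
# __get_invalids itself appears to detect the degenerate case where the
# triangles around the collapsed edge share extra edges, marking those
# edges for removal before the collapse proceeds.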
Hi,
I was wondering how hard it would be to adapt the code to regression (reconstructing input meshes with, e.g., an autoencoder framework).
Have you tried this before?
I didn't want to delve in too deep before being certain this could be done, but I suppose the main thing to alter would be the optimization loss. Any hints on how I could tackle this? Should I start from a "segmentation"-type dataset and compute .eseg files as the edge features, instead of using a per-vertex L2 error?
Thanks a lot
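Not the author, but if I read networks.py right, the criterion is created in one place (define_loss switches on opt.dataset_mode and returns a CrossEntropyLoss either way), so a first experiment could swap in a regression criterion there rather than touching the convolutions. A minimal sketch, where the 'regression' mode and its per-edge targets are hypothetical additions:

import torch.nn as nn

def define_loss(opt):
    if opt.dataset_mode == 'classification':
        return nn.CrossEntropyLoss()
    elif opt.dataset_mode == 'segmentation':
        return nn.CrossEntropyLoss(ignore_index=-1)
    elif opt.dataset_mode == 'regression':  # hypothetical new mode
        # per-edge L2 reconstruction error against target edge features
        return nn.MSELoss()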
Hi,
I've run your source code without changing any settings, but at epoch 200 I'm getting 97.5% accuracy on the SHREC16 dataset, whereas you report 98.6% accuracy in your paper. Could you please tell me whether that is the accuracy at epoch 200 or the highest value at any epoch of training?
Thanks,
Could you share other links?
Hey,
I have installed PyTorch without CUDA (since I don't have an NVIDIA GPU), and it raises an error that requires a small change to your code.
AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'
It might be worth mentioning this in the README.
It fails at:
MeshCNN/options/base_options.py", line 54, in parse torch.cuda.set_device(self.opt.gpu_ids[0])
I have tried changing it locally, but that seems to lead to more errors related to it.
The issue is: since I can't use CUDA, which PyTorch version should I install? (I installed the CUDA-less version.)
Thanks!
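A sketch of the guard one could try around that line in options/base_options.py, assuming self.opt.gpu_ids holds the parsed --gpu_ids list (with a guard like this, any recent CPU-only PyTorch build should work):

import torch

def set_gpu_device(gpu_ids):
    # only touch CUDA when a GPU was requested and is actually available;
    # otherwise fall through and run on the CPU
    if len(gpu_ids) > 0 and torch.cuda.is_available():
        torch.cuda.set_device(gpu_ids[0])

The helper name is hypothetical; the point is wrapping the torch.cuda.set_device(self.opt.gpu_ids[0]) call in the availability check.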
Hi,
Regards,
Kessho
Hi ranahanocka,
Cheers for the creativity. I am sorry if here is not the right place to ask the questions. I am looking at the code:
Thanks & Regards,
zg
Hi Rana.
Does the training data include the augmented data generated for the classification task? I have a pretty small dataset to work on, and augmented data would be really helpful.
I see .npz files stored in the ./train/cache folder. Are these files ever used?
Thanks.
Hi @ranahanocka,
I am currently trying to import the .obj files with their corresponding segmentation into Blender.
In my first attempt, I imported the high-resolution object (.off file) as well as the segmentation labels (.seg) into Blender. After matching the face labels in the .seg files to their corresponding vertices and grouping them into vertex groups according to their labels, I received a nicely segmented human body.
Now I want to do the same thing with the low-resolution object (.obj file) and the edge labels provided in the .eseg file. After matching the edge labels to their corresponding vertices and adding them to different vertex groups depending on their label, I am not getting good results. The vertices of the class "head", for example, look like this:
Do you have any idea why it did not work with the low-resolution object?
It seems that the order of the vertex indices no longer matches the labels.
I want to generate .seg files so that I can use my data to train this network. Could you please suggest any software that is better for this issue?
Can you please explain what self.sides actually signifies? It can be found in mesh_prepare.py.
Hi,
I want to change how MeshCNN collapses edges in order to train it to reduce a mesh based on its normals: the smaller the difference between the input triangle normal and the output, the more likely the edge is to be collapsed. My overall goal is to write a neural network that reduces the polygon count of any input mesh based on a set of criteria.
Where exactly can I find the code that decides whether or not an edge should be collapsed? I would guess it's in models/layers/mesh_pool.py, somewhere in the __build_queue function, but since the code is not well documented, I don't quite understand what it is doing there.
Any idea where I should try to implement that feature?
Hi Rana,
I really appreciate you for sharing such a nice code and your mesh pooling idea is really great.
I used your segmentation code on my own dataset, and I expected to get a vector indicating which edge belongs to which segment, or the final segmentation plot you show on GitHub.
Instead, what I get in the checkpoints/human_seg folder is some .obj files showing how mesh pooling decimates the faces (the gray plots you show on GitHub, which I can view through view.sh); I thought those plots belonged to the classification task.
It would be so nice if you help me to find my answer.
Best regards,
Fatemeh
I would love to see a new feature added to the blender_process.py script: taking a big mesh with an *.eseg file and returning a simplified mesh with new corresponding annotations.
Traceback (most recent call last):
File "train.py", line 23, in
for i, data in enumerate(dataset):
File "/home/ankit/MeshCNN/data/init.py", line 33, in iter
for i, data in enumerate(self.dataloader):
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 637, in next
return self._process_next_batch(batch)
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/ankit/MeshCNN/data/classification_data.py", line 31, in getitem
edge_features = pad(edge_features, self.opt.ninput_edges)
File "/home/ankit/MeshCNN/util/util.py", line 22, in pad
return np.pad(input_arr, pad_width=npad, mode='constant', constant_values=val)
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/numpy/lib/arraypad.py", line 1200, in pad
pad_width = _validate_lengths(narray, pad_width)
File "/home/ankit/anaconda3/envs/meshcnn/lib/python3.6/site-packages/numpy/lib/arraypad.py", line 985, in _validate_lengths
raise ValueError(fmt % (number_elements,))
ValueError: [(0, 0), (0, -150)] cannot contain negative values.
""
Do i need to change any base options to counter this error ?
""
Following this issue: #1
Add a check for the NumPy version, and throw an exception with a useful message if the version is too new.
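A minimal sketch of such a check (the version threshold follows the allow_pickle failure in #1; the function name is hypothetical):

import numpy as np
from distutils.version import LooseVersion

def assert_numpy_version():
    # np.load defaults to allow_pickle=False from 1.16.3 on, which breaks
    # reading the cached object arrays in mesh_prepare.py
    if LooseVersion(np.__version__) >= LooseVersion('1.16.3'):
        raise RuntimeError(
            'numpy %s detected: pass allow_pickle=True when loading the mesh '
            'cache, or install numpy<1.16.3 (see issue #1)' % np.__version__)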
I was working with your cube dataset to benchmark my own dataset later. Each and every mesh had the same number of triangles and vertices, which won't be the case for me. Will it be possible to train on data with a varying number of triangles and vertices?
Running the blender script, I sometimes get the following messages on some files:
Info: Applied modifier was not first, result may not be as expected
Info: Applied modifier was not first, result may not be as expected
Info: Applied modifier was not first, result may not be as expected
Info: Applied modifier was not first, result may not be as expected
Can this be ignored?
@ranahanocka First of all thanks for this interesting approach of CNN.
Unfortunately, I am currently running into this error when trying to train the network on my own dataset. The faces were simplified to 600 with the Blender script. Do you have an idea what could be causing this? Bad .obj files? Wrong parameters in the training configuration?
Console output:
saving the latest model (epoch 1, total_steps 16)
(epoch: 1, iters: 80, time: 0.045, data: 3.654) loss: 0.672
Traceback (most recent call last):
File "train.py", line 23, in <module>
for i, data in enumerate(dataset):
File "/kaggle/working/MeshCNN/data/__init__.py", line 33, in __iter__
for i, data in enumerate(self.dataloader):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
return self._process_data(data)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
data.reraise()
File "/opt/conda/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 2.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/kaggle/working/MeshCNN/data/classification_data.py", line 27, in __getitem__
mesh = Mesh(file=path, opt=self.opt, hold_history=False, export_folder=self.opt.export_folder)
File "/kaggle/working/MeshCNN/models/layers/mesh.py", line 16, in __init__
fill_mesh(self, file, opt)
File "/kaggle/working/MeshCNN/models/layers/mesh_prepare.py", line 11, in fill_mesh
mesh_data = from_scratch(file, opt)
File "/kaggle/working/MeshCNN/models/layers/mesh_prepare.py", line 61, in from_scratch
post_augmentation(mesh_data, opt)
File "/kaggle/working/MeshCNN/models/layers/mesh_prepare.py", line 185, in post_augmentation
slide_verts(mesh, opt.slide_verts)
File "/kaggle/working/MeshCNN/models/layers/mesh_prepare.py", line 198, in slide_verts
if min(dihedral[edges]) > 2.65:
ValueError: min() arg is an empty sequence
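Not the maintainer, but the last frame says min() received an empty sequence, i.e. dihedral[edges] was empty for some vertex; an empty incidence list usually means a degenerate or non-manifold vertex in the simplified .obj, so it may be worth validating the offending files. A guard along these lines (a workaround sketch with a hypothetical helper, not a root-cause fix):

import numpy as np

def vertex_ok_to_slide(dihedral, edges, thresh=2.65):
    # mirrors the slide_verts check, but tolerates vertices with no
    # incident edges instead of letting min() raise on an empty sequence
    return len(edges) > 0 and np.min(dihedral[edges]) > thresh

dihedral = np.array([3.0, 2.9, 1.2])
print(vertex_ok_to_slide(dihedral, np.array([0, 1], dtype=int)))  # True
print(vertex_ok_to_slide(dihedral, np.array([], dtype=int)))      # False, no crash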
Hi @ranahanocka!
I want to create new ground truth data for the human data set with slightly varying vertices to check whether your trained model will be able to segment those.
I managed to create the ground-truth .eseg file with the MATLAB code that you provided.
Unfortunately, the .seseg files are not created, and I can't find a different function that I have to run. Is there another way to create the .seseg files?
bash ./scripts/shrec/train.sh
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549630534704/work/torch/csrc/cuda/Module.cpp line=34 error=35 : CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "train.py", line 9, in
opt = TrainOptions().parse()
File "/home/peichen/python_test/MeshCNN/options/base_options.py", line 54, in parse
torch.cuda.set_device(self.opt.gpu_ids[0])
File "/home/peichen/anaconda3/envs/meshcnn/lib/python3.6/site-packages/torch/cuda/init.py", line 264, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /opt/conda/conda-bld/pytorch_1549630534704/work/torch/csrc/cuda/Module.cpp:34
Hi, first of all, thanks for sharing this code public!
I've discovered a problem with the README. It says the code works with PyTorch 1.0, but it won't, because it uses torch.BoolTensor in models/layers/mesh.py, and torch.BoolTensor is only supported since 1.2.0.
I confirmed that the test code for scripts/human_seg works fine with PyTorch 1.2.0. I hope this information helps others testing this code.
Hello @ranahanocka, I've noticed that the number of epochs when running the training scripts for segmentation and cubes classification is 2100, while the number of epochs for SHREC classification is 200. I am curious about this big difference, especially since the other parameters of the networks are so similar.
In my experiments, the testing accuracy seems stable well before reaching 2100 epochs, so I was wondering if this is a small bug (--niter_decay is only specified for SHREC classification but not for the other experiments). If it isn't, I would appreciate it if you could share your insights on why SHREC does not need as many training iterations as the other datasets.
Thanks!
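For context on how those flags interact: with lr_policy: lambda, a run lasts niter + niter_decay epochs in total; the learning rate is constant for the first niter epochs and then decays linearly to zero over the remaining niter_decay. A paraphrase of the rule in networks.py's get_scheduler (the exact expression is my reading of the code, not a verbatim copy):

def lr_multiplier(epoch, epoch_count=1, niter=100, niter_decay=100):
    # constant for the first `niter` epochs, then linear decay to zero;
    # SHREC's niter=100, niter_decay=100 gives the 200-epoch run
    return 1.0 - max(0, epoch + epoch_count - niter) / float(niter_decay + 1)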
How do we choose these parameters when defining the MeshConvNet model?
thank you!
Commit feaa6c1 breaks compatibility with pytorch=1.0.1, which is the version specified in the environment.yml file:
TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8.
Updating PyTorch to version 1.2 fixes the issue.
I am trying to run MeshCNN on ShapeNet models. It seems I am able to get past the manifold issue using https://github.com/hjwdzh/Manifold, but I am stuck on another problem. I have noticed that the number of pooled edges varies (maybe because the feature vectors are different, so edges appear in the queue in a different order each time), and it is often larger than the target number of pooled edges. Do you have any idea why this might be happening? Is there a way around this problem?
Thanks!
Some of the larger datasets out there (such as ShapeNet and ABC) have edge counts ranging from the hundreds to the hundreds of thousands.
Do you have any advice on selecting a pool_res and ncf for datasets with this sort of edge-count variation? And/or any thoughts on the effect this would have on your research?
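Not the author, but one data point from the scripts: the SHREC run above pools 750 input edges through pool_res [600, 450, 300, 180], i.e. roughly fixed fractions (0.8, 0.6, 0.4, 0.24) of the input resolution. A hedged starting point for variable-size datasets is to set --ninput_edges near your maximum edge count and scale pool_res by the same ratios:

def pool_res_like_shrec(ninput_edges):
    # ratios taken from scripts/shrec/train.sh: 600/750, 450/750, 300/750, 180/750
    ratios = (0.80, 0.60, 0.40, 0.24)
    return [int(ninput_edges * r) for r in ratios]

print(pool_res_like_shrec(750))    # [600, 450, 300, 180]
print(pool_res_like_shrec(16000))  # [12800, 9600, 6400, 3840]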
Hi @ranahanocka
In __getitem__():
meta['edge_features'] = (edge_features - self.mean) / self.std
where self.mean and self.std are read from the 'mean_std_cache.p' file.
How is this file generated, and how can I run this on my own data without it?
Thanks
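In case it helps others: as far as I can tell the cache is written automatically on the first training run (see get_mean_std in data/base_dataset.py), so nothing has to be created by hand for a new dataset; deleting a stale mean_std_cache.p forces recomputation. A paraphrased sketch of what it stores (details simplified, not the exact implementation):

import pickle
import numpy as np

def compute_mean_std(samples, cache_path):
    # per-channel mean/std of edge features, averaged over the training set;
    # each sample holds an 'edge_features' array of shape channels x edges
    means = [s['edge_features'].mean(axis=1) for s in samples]
    stds = [s['edge_features'].std(axis=1) for s in samples]
    mean, std = np.mean(means, axis=0), np.mean(stds, axis=0)
    with open(cache_path, 'wb') as f:
        pickle.dump({'mean': mean, 'std': std}, f)
    return mean, std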
@ranahanocka
Hi,
Once we segment the object, how can we view each segmented component independently?
I used the classification pretrained parameters, and the test works well for 750 edges. But then I subdivided the mesh to 3000 edges (applying one Loop subdivision; still manifold) and set --pool_res 2500 2000 1500 1000, and the error below occurs. Do I have to use a fixed number of edges (like 750) as input? How should dense meshes be handled; do we need to retrain?
Running Test
loaded mean / std from cache
loading the model from ./checkpoints\mytest\latest_net.pth
Traceback (most recent call last):
File "test.py", line 25, in <module>
run_test()
File "test.py", line 16, in run_test
for i, data in enumerate(dataset):
File "E:\Github\MeshCNN\data\__init__.py", line 33, in __iter__
for i, data in enumerate(self.dataloader):
File "D:\DevTool\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 582, in __next__
return self._process_next_batch(batch)
File "D:\DevTool\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 608, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
File "D:\DevTool\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "D:\DevTool\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 99, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "E:\Github\MeshCNN\data\classification_data.py", line 31, in __getitem__
edge_features = pad(edge_features, self.opt.ninput_edges)
File "E:\Github\MeshCNN\util\util.py", line 22, in pad
return np.pad(input_arr, pad_width=npad, mode='constant', constant_values=val)
File "D:\DevTool\anaconda3\lib\site-packages\numpy\lib\arraypad.py", line 1172, in pad
pad_width = _as_pairs(pad_width, narray.ndim, as_index=True)
File "D:\DevTool\anaconda3\lib\site-packages\numpy\lib\arraypad.py", line 949, in _as_pairs
raise ValueError("index can't contain negative values")
ValueError: index can't contain negative values
This is more a question rather than an issue, so apologies if it is not the right place to ask it.
Have you also tried the network on the de facto ModelNet50 (classification) and ShapeNet Core (segmentation) datasets?
I attempted to use the pre-trained network on personally generated data and the segmentation results were quite poor. The mesh is decimated down to ~1500 verts and 3000 faces, manifold, and watertight. I attached the OBJ (inside zip) I used.
mesh.zip
When running the blender script to simplify the shapes, it always gives the following message: Error: File format is not supported in file 'C:\BatchAnalysis\MeshNetworkInput\0\test\filename.obj'
However, the .obj file is created and can be opened with FreeCAD, for example.
hi,
While running split-10 classification on SHREC, do we need to change the batch_size to 10, or are there other settings to change? SHREC16 has 16 training samples per class; do we need a SHREC10 variant with 10 samples per class?
Kind regards,
Rana Kamran
Hi Rana,
first of all, thanks for this contribution. It really looks promising!
I am working with your code in order to analyze a set of meshes formed by molecular shapes. The objects I work with are all genus 0 manifolds (i.e. no holes). I am running your code with segmentation options (although I don't really have any segments defined).
I could run the code with some dummy examples. But, when I try to extend it to a more general set, I run into the following error:
Traceback (most recent call last):
File "train.py", line 36, in <module>
model.optimize_parameters()
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/mesh_classifier.py", line 72, in optimize_parameters
out = self.forward()
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/mesh_classifier.py", line 59, in forward
out = self.net(self.edge_features, self.mesh)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 209, in __call__
return self.forward(x, meshes)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 202, in forward
fe, before_pool = self.encoder((x, meshes))
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 363, in __call__
return self.forward(x)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 347, in forward
fe, before_pool = conv((fe, meshes))
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 228, in __call__
return self.forward(x)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/networks.py", line 250, in forward
x2 = self.pool(x2, meshes)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/layers/mesh_pool.py", line 21, in __call__
return self.forward(fe, meshes)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/layers/mesh_pool.py", line 34, in forward
self.__pool_main(mesh_index)
File "/Users/ismael.gomez/Documents/Software/MeshCNN/MeshCNN-unsupervised v2/models/layers/mesh_pool.py", line 50, in __pool_main
value, edge_id = heappop(queue)
IndexError: index out of range
Adding some marks in your code, I could identify that this happens in cases where there are -1's in the gemm_edges structure, but I don't understand why. My surfaces are all (allegedly) closed, so no boundary edges should exist.
Do you have any idea of why this is happening?
Thanks in advance,
Ismael.
Hi, I think I'm confused about how to test my own case. Running those commands (the human segmentation ones) works fine for me, but if I want to segment some other .obj file, what should I do? I tried putting my .obj file in test, but that doesn't seem to work; it asks me for seg files.
I don't need to evaluate the segmentation result; I just want to segment with the trained model.
Thanks a lot and really appreciate your code.
Hi Rana,
My task is to use MeshCNN to generate some specific meshes.
I have some questions about using it. During training, do the samples all have the same adjacent-edges matrix? The meshes in my dataset have different ones; can I still use MeshCNN?
hi rana
We at printsyst mostly use STL files. This is the standard output format of many CAD tools and the most common format for 3D printers.
Why did you use (and still use) OBJ files instead?
Reuven
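Not the maintainer, but a likely reason: an OBJ stores faces over shared vertex indices, so the edge connectivity that MeshCNN's convolution is defined on is explicit, whereas an STL is an unindexed triangle soup. One pragmatic route is converting STL to OBJ first; a minimal sketch using the trimesh library (my tool choice, not the project's, and the path is a placeholder; inputs must still end up manifold and watertight):

import trimesh

mesh = trimesh.load('part.stl')  # STL stores bare triangles, no shared indices
mesh.merge_vertices()            # weld duplicated vertices so faces share edges
mesh.export('part.obj')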
Hi,
The visualizations of 3D objects in your paper are elegant. Can you talk about how you drew them?
Thanks.
So I'm trying to use MeshCNN on some quite large meshes, around ~10,000 faces and ~16,000 edges. When running the train script, however, I get the following error:
File "C:\MeshCNN\models\layers\mesh_pool.py", line 50, in __pool_main
value, edge_id = heappop(queue)
IndexError: index out of range
It seems that the queue runs out of edges before it has reached the number of edges required by the pool_res argument. Is this correct?
I'm also guessing this happens because __pool_edge in the same file returns False for too many of the edges in my mesh.
Does anyone have any idea how I might rectify this? Is there a way to make sure the queue does not run out of edges, so that I can keep removing edges regardless of how many I started with?
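Not the author, but on the "where is the decision made" part: as far as I can tell, pooling is not a per-edge learned gate. __build_queue orders edges by the magnitude of their learned features and the pool collapses the smallest first, while __pool_edge vetoes collapses that would break the mesh (boundaries, doublets, invalid one-rings); on hard meshes so many edges are vetoed that the heap empties before pool_res is reached, hence the IndexError. A paraphrased sketch of the queue:

import heapq
import torch

def build_queue(features, edges_count):
    # features: channels x edges for one mesh; smallest magnitude pops first
    magnitudes = torch.sum(features[:, :edges_count] ** 2, dim=0)
    heap = [(m.item(), i) for i, m in enumerate(magnitudes)]
    heapq.heapify(heap)
    return heap

Cleaning the inputs (manifold, watertight, no degenerate faces) or pooling less aggressively (a larger pool_res) are the usual ways to keep the queue from running dry.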
I want to know what sides (defined in mesh_prepare.py) stands for. Thank you.