
isl-org / open3d-pointnet2-semantic3d


Semantic3D segmentation with Open3D and PointNet++

License: Other

Python 65.67% C++ 20.62% Cuda 7.55% CMake 4.07% Shell 2.10%
open3d pointnet pointnet2 point-cloud classification tensorflow

open3d-pointnet2-semantic3d's Introduction

Semantic3D semantic segmentation with Open3D and PointNet++

Intro

Demo project for Semantic3D (semantic-8) segmentation with Open3D and PointNet++. The purpose of this project is to showcase the usage of Open3D in deep learning pipelines and to provide a clean baseline implementation for semantic segmentation on the Semantic3D dataset. Here's our entry on the semantic-8 test benchmark page.

Open3D is an open-source library that supports rapid development of software that deals with 3D data. The Open3D frontend exposes a set of carefully selected data structures and algorithms in both C++ and Python. The backend is highly optimized and is set up for parallelization. We welcome contributions from the open-source community.

In this project, Open3D was used for

  • Point cloud data loading, writing, and visualization. Open3D provides efficient implementations of various point cloud manipulation methods.
  • Data pre-processing, in particular, voxel-based down-sampling.
  • Point cloud interpolation, in particular, fast nearest neighbor search for label interpolation.
  • And more. A minimal sketch of these calls is shown below.
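
For a flavor of the Open3D calls used throughout this pipeline, here is a minimal sketch. It assumes a recent open3d Python release (module paths such as o3d.io differ slightly in the 0.x versions this project was written against) and a hypothetical file name:

import numpy as np
import open3d as o3d

# Read a point cloud (hypothetical file; Open3D infers the format from the extension).
pcd = o3d.io.read_point_cloud("dataset/semantic_raw/example.pcd")

# Voxel-based down-sampling, as used in the pre-processing step.
down = pcd.voxel_down_sample(voxel_size=0.05)

# Fast nearest-neighbor search, as used for label interpolation.
tree = o3d.geometry.KDTreeFlann(down)
k, idx, dist2 = tree.search_hybrid_vector_3d(np.zeros(3), radius=0.2, max_nn=3)

# Write and visualize the result.
o3d.io.write_point_cloud("example_downsampled.pcd", down)
o3d.visualization.draw_geometries([down])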

This project is forked from Mathieu Orhan and Guillaume Dekeyser's repo, which is in turn forked from the original PointNet2. We thank the original authors for sharing their work.

Usage

1. Download

Download the Semantic3D dataset and extract it by running the following commands:

cd dataset/semantic_raw

bash download_semantic3d.sh

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...

2. Convert .txt to .pcd files

Run

python preprocess.py

Open3D is able to read .pcd files much more efficiently.

Open3D-PointNet2-Semantic3D/dataset/semantic_raw
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd (new)
├── bildstein_station1_xyz_intensity_rgb.txt
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd (new)
├── bildstein_station3_xyz_intensity_rgb.txt
├── ...
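
For illustration, the conversion done in this step can be sketched with Open3D's Python API roughly as follows. This is a hedged sketch on a single, hypothetical file; preprocess.py handles the whole directory and may be implemented differently internally:

import numpy as np
import open3d as o3d

# Hypothetical single-file example; preprocess.py loops over dataset/semantic_raw.
txt_file = "dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.txt"
pcd_file = "dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.pcd"

# Each Semantic3D line is: x y z intensity r g b
data = np.loadtxt(txt_file)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(data[:, 0:3])
pcd.colors = o3d.utility.Vector3dVector(data[:, 4:7] / 255.0)  # RGB scaled to [0, 1]

# A binary .pcd file loads much faster than the raw text file.
o3d.io.write_point_cloud(pcd_file, pcd)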

3. Downsample

Run

python downsample.py

The downsampled dataset will be written to dataset/semantic_downsampled. Points with label 0 (unlabeled) are excluded during downsampling.

Open3D-PointNet2-Semantic3D/dataset/semantic_downsampled
├── bildstein_station1_xyz_intensity_rgb.labels
├── bildstein_station1_xyz_intensity_rgb.pcd
├── bildstein_station3_xyz_intensity_rgb.labels
├── bildstein_station3_xyz_intensity_rgb.pcd
├── ...
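
Conceptually, this step can be sketched as follows. The sketch assumes Open3D >= 0.10 (for select_by_index), an illustrative voxel size, and a single hypothetical scene; downsample.py's actual implementation, which carries labels through voxel_down_sample_and_trace, may differ:

import numpy as np
import open3d as o3d

# Hypothetical single-scene example; downsample.py processes every scene.
pcd = o3d.io.read_point_cloud("dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.pcd")
labels = np.loadtxt("dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.labels", dtype=np.int32)

# Drop unlabeled points (label 0) before down-sampling.
keep = np.where(labels != 0)[0].tolist()
pcd = pcd.select_by_index(keep)   # Open3D >= 0.10; earlier releases call this select_down_sample
labels = labels[keep]

# Voxel-based down-sampling of the geometry (voxel size is illustrative).
down = pcd.voxel_down_sample(voxel_size=0.05)

# Assign each down-sampled point the label of its nearest original point.
tree = o3d.geometry.KDTreeFlann(pcd)
down_labels = np.empty(len(down.points), dtype=np.int32)
for i, p in enumerate(np.asarray(down.points)):
    _, idx, _ = tree.search_knn_vector_3d(p, 1)
    down_labels[i] = labels[idx[0]]

o3d.io.write_point_cloud("dataset/semantic_downsampled/bildstein_station1_xyz_intensity_rgb.pcd", down)
np.savetxt("dataset/semantic_downsampled/bildstein_station1_xyz_intensity_rgb.labels", down_labels, fmt="%d")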

4. Compile TF Ops

We need to build the TF kernels in tf_ops. First, activate the virtualenv and make sure TensorFlow can be found by the current Python. The following line should run without error.

python -c "import tensorflow as tf"

Then build TF ops. You'll need CUDA and CMake 3.8+.

cd tf_ops
mkdir build
cd build
cmake ..
make

After compilation, the following .so files should be in the build directory.

Open3D-PointNet2-Semantic3D/tf_ops/build
├── libtf_grouping.so
├── libtf_interpolate.so
├── libtf_sampling.so
├── ...

Verify that the TF kernels are working by running

cd .. # Now we're at Open3D-PointNet2-Semantic3D/tf_ops
python test_tf_ops.py

5. Train

Run

python train.py

By default, the training set will be used for training and the validation set will be used for validation. To train with both the training and validation sets, use the --train_set=train_full flag. Checkpoints will be written to log/semantic.

6. Predict

Pick a checkpoint and run the predict.py script. The prediction dataset is configured by --set. Since PointNet2 only takes a few thousand points per forward pass, we need to sample from the prediction dataset multiple times to get good coverage of the points. Each sample contains the few thousand points required by PointNet2. To specify the number of such samples per scene, use the --num_samples flag.

python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt \
                  --set=validation \
                  --num_samples=500

The prediction results will be written to result/sparse.

Open3D-PointNet2-Semantic3D/result/sparse
├── sg27_station4_intensity_rgb.labels
├── sg27_station4_intensity_rgb.pcd
├── sg27_station5_intensity_rgb.labels
├── sg27_station5_intensity_rgb.pcd
├── ...

7. Interpolate

The last step is to interpolate the sparse predictions to the full point cloud. We use Open3D's K-NN hybrid search with a specified radius.

python interpolate.py

The prediction results will be written to result/dense.

Open3D-PointNet2-Semantic3D/result/dense
├── sg27_station4_intensity_rgb.labels
├── sg27_station5_intensity_rgb.labels
├── ...
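
Conceptually, the interpolation does something like the sketch below: for every point in the full-resolution cloud, collect the nearby sparse predictions with a hybrid radius/K-NN search and take a majority vote. File names and the radius here are hypothetical; interpolate.py handles all scenes and is considerably faster than this plain Python loop:

import collections
import numpy as np
import open3d as o3d

# Hypothetical single-scene example.
sparse_pcd = o3d.io.read_point_cloud("result/sparse/sg27_station4_intensity_rgb.pcd")
sparse_labels = np.loadtxt("result/sparse/sg27_station4_intensity_rgb.labels", dtype=np.int32)
dense_pcd = o3d.io.read_point_cloud("dataset/semantic_raw/sg27_station4_intensity_rgb.pcd")

tree = o3d.geometry.KDTreeFlann(sparse_pcd)
dense_labels = np.zeros(len(dense_pcd.points), dtype=np.int32)

for i, p in enumerate(np.asarray(dense_pcd.points)):
    # Hybrid search: at most max_nn neighbors, all within the given radius.
    k, idx, _ = tree.search_hybrid_vector_3d(p, radius=0.2, max_nn=3)
    if k > 0:
        # Majority vote among the sparse predictions that were found.
        dense_labels[i] = collections.Counter(sparse_labels[list(idx)]).most_common(1)[0][0]

np.savetxt("result/dense/sg27_station4_intensity_rgb.labels", dense_labels, fmt="%d")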

8. Submission

Finally, if you're submitting to the Semantic3D benchmark, we've included a handy tool to rename the submission files.

python renamer.py

Summary of directories

  • dataset/semantic_raw: Raw Semantic3D data, .txt and .labels files. Also contains the .pcd files generated by preprocess.py.
  • dataset/semantic_downsampled: Generated by downsample.py. Downsampled data, contains .pcd and .labels files.
  • result/sparse: Generated by predict.py. Sparse predictions, contains .pcd and .labels files.
  • result/dense: Dense predictions, contains .labels files.
  • result/dense_label_colorized: Dense predictions with points colored by label type.

open3d-pointnet2-semantic3d's People

Contributors

germanros1987, keyserguillaume, yxlao


open3d-pointnet2-semantic3d's Issues

InvalidArgumentError: No OpKernel was registered to support Op 'FarthestPointSample'

I tried to run train.py (step 5, Train, under Usage).
But I get the following InvalidArgumentError.
Please tell me how to deal with this problem.

[my environment]
ubuntu 16.04
cuda 9.0
cudnn 7.5.0
[anaconda3]
python 3.6
tensorflow 1.12.0
tensorflow-gpu 1.12.0
scikit-learn 0.21.3
open3d-python 0.7.0.0


tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'FarthestPointSample' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[{{node layer1/FarthestPointSample}} = FarthestPointSample[npoint=1024, _device="/device:GPU:0"](Slice)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 469, in
train()
File "train.py", line 410, in train
sess.run(tf.global_variables_initializer())
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'FarthestPointSample' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[node layer1/FarthestPointSample (defined at <string>:103)  = FarthestPointSample[npoint=1024, _device="/device:GPU:0"](Slice)]]

Caused by op 'layer1/FarthestPointSample', defined at:
File "train.py", line 469, in
train()
File "train.py", line 359, in train
bn_decay=bn_decay,
File "/home/uname/PointNet2/Open3D-PointNet2-Semantic3D-master/model.py", line 47, in get_model
scope="layer1",
File "/home/uname/PointNet2/Open3D-PointNet2-Semantic3D-master/util/pointnet_util.py", line 144, in pointnet_sa_module
npoint, radius, nsample, xyz, points, knn, use_xyz
File "/home/uname/PointNet2/Open3D-PointNet2-Semantic3D-master/util/pointnet_util.py", line 37, in sample_and_group
xyz, farthest_point_sample(npoint, xyz)
File "/home/uname/PointNet2/Open3D-PointNet2-Semantic3D-master/tf_ops/tf_sampling.py", line 69, in farthest_point_sample
return sampling_module.farthest_point_sample(inp, npoint)
File "", line 103, in farthest_point_sample
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/uname/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'FarthestPointSample' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[node layer1/FarthestPointSample (defined at <string>:103)  = FarthestPointSample[npoint=1024, _device="/device:GPU:0"](Slice)]]

[Problem] Training step

Hi,

I followed the instructions to run the training (python train.py) with the default setting max_epoch=500.
At the end of epoch 499, an error pops up:

max_epoch 500
**** EPOCH 499 ****
2019-01-22 17:23:39.480862
Progress: [##########] 100%mean loss: 0.062824
Overall accuracy : 0.993542
Average IoU : 0.966070
IoU of man-made terrain : 0.978290
IoU of natural terrain : 0.991271
IoU of high vegetation : 0.995123
IoU of low vegetation : 0.932481
IoU of buildings : 0.994296
IoU of hard scape : 0.950104
IoU of scanning artifact : 0.926501
IoU of cars : 0.960493
(tf) william@william-Ubuntu:/media/william/E/Open3D-PointNet2-Semantic3D$ Process ForkPoolWorker-1:1:
Traceback (most recent call last):
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 125, in worker
    put((job, i, result))
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 397, in _send_bytes
    self._send(header)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 130, in worker
    put((job, i, (False, wrapped)))
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Process ForkPoolWorker-1:5:
Traceback (most recent call last):
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 125, in worker
    put((job, i, result))
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 397, in _send_bytes
    self._send(header)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 130, in worker
    put((job, i, (False, wrapped)))
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/media/william/E/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

Would anyone please advise on what might have gone wrong?
Thanks.
William

cmake build error

I am developing in Colab and ran the following commands.

cd tf_ops
mkdir build
cd build
cmake ..
make

However, the following error occurred and no progress was made.

cmake ..


-- The CXX compiler identification is GNU 7.5.0
-- The CUDA compiler identification is NVIDIA 10.1.243
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Configuring done
-- Generating done
-- Build files have been written to: /content/drive/My Drive/Colab Notebooks/Open3D-PointNet2-Semantic3D-master/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root
Scanning dependencies of target open3d
[ 12%] Creating directories for 'open3d'
[  0%] Performing download step (git clone) for 'open3d'
Cloning into 'open3d'...
Checking out files: 100% (1242/1242), done.
error: Your local changes to the following files would be overwritten by checkout:
	3rdparty/GLFW/CMake/GenerateMappings.cmake
	3rdparty/GLFW/CMake/MacOSXBundleInfo.plist.in
	3rdparty/GLFW/CMake/amd64-mingw32msvc.cmake
	3rdparty/GLFW/CMake/i586-mingw32msvc.cmake
	3rdparty/GLFW/CMake/i686-pc-mingw32.cmake
	3rdparty/GLFW/CMake/i686-w64-mingw32.cmake
	3rdparty/GLFW/CMake/modules/FindMir.cmake
	3rdparty/GLFW/CMake/modules/FindOSMesa.cmake
	3rdparty/GLFW/CMake/modules/FindVulkan.cmake
	3rdparty/GLFW/CMake/modules/FindWaylandProtocols.cmake
	3rdparty/GLFW/CMake/modules/FindXKBCommon.cmake
	3rdparty/GLFW/CMake/x86_64-w64-mingw32.cmake
	3rdparty/GLFW/CMakeLists.txt
	3rdparty/GLFW/LICENSE.md
	3rdparty/GLFW/README.md
	3rdparty/GLFW/cmake_uninstall.cmake.in
	3rdparty/GLFW/deps/KHR/khrplatform.h
	3rdparty/GLFW/deps/getopt.c
	3rdparty/GLFW/deps/getopt.h
	3rdparty/GLFW/deps/glad.c
	3rdparty/GLFW/deps/glad/glad.h
	3rdparty/GLFW/deps/linmath.h
	3rdparty/GLFW/deps/mingw/_mingw_dxhelper.h
	3rdparty/GLFW/deps/mingw/dinput.h
	3rdparty/GLFW/deps/mingw/xinput.h
	3rdparty/GLFW/deps/nuklear.h
	3rdparty/GLFW/deps/nuklear_glfw_gl2.h
	3rdparty/GLFW/deps/stb_image_write.h
	3rdparty/GLFW/deps/tinycthread.c
	3rdparty/GLFW/deps/tinycthread.h
	3rdparty/GLFW/deps/vs2008/stdint.h
	3rdparty/GLFW/deps/vulkan/vk_platform.h
	3rdparty/GLFW/deps/vulkan/vulkan.h
	3rdparty/GLFW/include/GLFW/glfw3.h
	3rdparty/GLFW/include/GLFW/glfw3native.h
	3rdparty/GLFW/src/CMakeLists.txt
	3rdparty/GLFW/src/cocoa_init.m
	3rdparty/GLFW/src/cocoa_joystick.h
	3rdparty/GLFW/src/cocoa_joystick.m
	3rdparty/GLFW/src/cocoa_monitor.m
	3rdparty/GLFW/src/cocoa_platform.h
	3rdparty/GLFW/src/cocoa_time.c
	3rdparty/GLFW/src/cocoa_window.m
	3rdparty/GLFW/src/context.c
	3rdparty/GLFW/src/egl_context.c
	3rdparty/GLFW/src/egl_context.h
	3rdparty/GLFW/src/glfw3.pc.in
	3rdparty/GLFW/src/glfw3Config.cmake.in
	3rdparty/GLFW/src/glfw_config.h.in
	3rdparty/GLFW/src/glx_context.c
	3rdparty/GLFW/src/glx_context.h
	3rdparty/GLFW/src/init.c
	3rdparty/GLFW/src/input.c
	3rdparty/GLFW/src/internal.h
	3rdparty/GLFW/src/linux_joystick.c
	3rdparty/GLFW/src/linux_joystick.h
	3rdparty/GLFW/src/mappings.h
	3rdparty/GLFW/src/mappings.h.in
	3rdparty/GLFW/src/mir_init.c
	3rdparty/GLFW/src/mir_monitor.c
	3rdparty/GLFW/src/mir_platform.h
	3rdparty/GLFW/src/mir_window.c
	3rdparty/GLFW/src/monitor.c
	3rdparty/GLFW/src/nsgl_context.h
	3rdparty/GLFW/src/nsgl_context.m
	3rdparty/GLFW/src/null_init.c
	3rdparty/GLFW/src/null_joystick.c
	3rdparty/GLFW/src/null_joystick.h
	3rdparty/GLFW/src/null_monitor.c
	3rdparty/GLFW/src/null_platform.h
	3rdparty/GLFW/src/null_window.c
	3rdparty/GLFW/src/osmesa_context.c
	3rdparty/GLFW/src/osmesa_context.h
	3rdparty/GLFW/src/posix_thread.c
	3rdparty/GLFW/src/posix_thread.h
	3rdparty/GLFW/src/posix_time.c
	3rdparty/GLFW/src/posix_time.h
	3rdparty/GLFW/src/vulkan.c
	3rdparty/GLFW/src/wgl_context.c
	3rdparty/GLFW/src/wgl_context.h
	3rdparty/GLFW/src/win32_init.c
	3rdparty/GLFW/src/win32_joystick.c
	3rdparty/GLFW/src/win32_joystick.h
	3rdparty/GLFW/src/win32_monitor.c
	3rdparty/GLFW/src/win32_platform.h
	3rdparty/GLFW/src/win32_thread.c
	3rdparty/GLFW/src/win32_time.c
	3rdparty/GLFW/src/win32_window.c
	3rdparty/GLFW/src/window.c
	3rdparty/GLFW/src/wl_init.c
	3rdparty/GLFW/src/wl_monitor.c
	3rdparty/GLFW/src/wl_platform.h
	3rdparty/GLFW/src/wl_window.c
	3rdparty/GLFW/src/x11_init.c
	3rdparty/GLFW/src/x11_monitor.c
	3rdparty/GLFW/src/x11_platform.h
	3rdparty/GLFW/src/x11_window.c
	3rdparty/GLFW/src/xkb_unicode.c
	3rdparty/GLFW/src/xkb_unicode.h
	3rdparty/jsoncpp/AUTHORS
	3rdparty/jsoncpp/LICENSE
	3rdparty/jsoncpp/README.md
	3rdparty/jsoncpp/include/json/allocator.h
	3rdparty/jsoncpp/include/json/assertions.h
	3rdparty/jsoncpp/include/json/autolink.h
	3rdparty/jsoncpp/include/json/config.h
	3rdparty/jsoncpp/include/json/features.h
	3rdparty/jsoncpp/include/json/forwards.h
	3rdparty/jsoncpp/include/json/json.h
	3rdparty/jsoncpp/include/json/reader.h
	3rdparty/jsoncpp/include/json/value.h
	3rdparty/jsoncpp/include/json/version.h
	3rdparty/jsoncpp/include/json/writer.h
	3rdparty/jsoncpp/json_reader.cpp
	3rd
Aborting
CMake Error at /content/drive/My Drive/Colab Notebooks/Open3D-PointNet2-Semantic3D-master/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root/open3d/tmp/open3d-gitclone.cmake:75 (message):
  Failed to checkout tag: '33e46f7'


CMakeFiles/open3d.dir/build.make:89: recipe for target 'open3d/src/open3d-stamp/open3d-download' failed
make[2]: *** [open3d/src/open3d-stamp/open3d-download] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/open3d.dir/all' failed
make[1]: *** [CMakeFiles/open3d.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
-- Could NOT find Open3D (missing: Open3D_DIR)
CMake Error at open3d.cmake:41 (message):
  Open3D build was not successful
Call Stack (most recent call first):
  open3d.cmake:45 (build_open3d)
  CMakeLists.txt:10 (include)


-- Configuring incomplete, errors occurred!
See also "/content/drive/My Drive/Colab Notebooks/Open3D-PointNet2-Semantic3D-master/Open3D-PointNet2-Semantic3D-master/tf_ops/build/CMakeFiles/CMakeOutput.log".

Do you have any solution for the above error?
I would really appreciate it if you let me know.

interpolate.py raises Segmentation fault (core dumped)

After finishing prediction on the test data set, interpolate.py raised this error. Has anyone encountered the same problem? By the way, I am working on data similar to Semantic3D, but with 25 classes instead of 9.

CMake: Open3D build was not successful!

I cannot get CMake to configure the CMakeLists.txt correctly.

-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:22 (project):
No CMAKE_C_COMPILER could be found.

Tell CMake where to find the compiler by setting either the environment
variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
the compiler, or to the compiler name if it is in the PATH.

-- Configuring incomplete, errors occurred!
See also "/home/chenlin/桌面/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root/open3d/src/open3d-build/CMakeFiles/CMakeOutput.log".
See also "/home/chenlin/桌面/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root/open3d/src/open3d-build/CMakeFiles/CMakeError.log".
CMakeFiles/open3d.dir/build.make:106: recipe for target 'open3d/src/open3d-stamp/open3d-configure' failed
make[2]: *** [open3d/src/open3d-stamp/open3d-configure] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/open3d.dir/all' failed
make[1]: *** [CMakeFiles/open3d.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
-- Could NOT find Open3D (missing: Open3D_DIR)
CMake Error at open3d.cmake:41 (message):
Open3D build was not successful
Call Stack (most recent call first):
open3d.cmake:45 (build_open3d)
CMakeLists.txt:10 (include)

-- Configuring incomplete, errors occurred!

Thanks for reading; I would appreciate it if you have a solution.

tensorflow.python.framework.errors_impl.NotFoundError

Hi, thanks for the great work! I use TF 1.12, CUDA 9.0, and cuDNN 7.4.2, and I built libtf_grouping.so, libtf_interpolate.so, and libtf_sampling.so. But I get the following error when I run test_tf_ops.py.

Traceback (most recent call last):
  File "test_tf_ops.py", line 4, in <module>
    from tf_grouping import query_ball_point, group_point, knn_point
  File "/media/orange/work/Open3D-PointNet2-Semantic3D-master/tf_ops/tf_grouping.py", line 9, in <module>
    os.path.join(BASE_DIR, "build", "libtf_grouping.so")
  File "/home/orange/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 60, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /media/orange/work/Open3D-PointNet2-Semantic3D-master/tf_ops/build/libtf_grouping.so: undefined symbol: _ZN10tensorflow8internal21CheckOpMessageBuilder9NewStringEv

[Problem] training step

Hi,

I followed the instructions to run the training (python train.py) with the default setting max_epoch=500.
At the beginning of the training step, an error pops up:

2019-02-02 15:50:29.682610: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2019-02-02 15:50:29.907778: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 10.37GiB
2019-02-02 15:50:29.907851: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
--- Get model and loss
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 164, in fill_queues
stack_train.put(p.get())
File "/usr/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
ValueError: probabilities do not sum to 1
--- Get training operator
2019-02-02 15:50:37.119749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
('in epoch', 0)
('max_epoch', 500)
**** EPOCH 000 ****
2019-02-02 15:50:41.417430
Progress: [----------] 0.0%

Also, the progress percentage doesn't increase during a day and it keeps 0.0%.

Would anyone please advise on what the error above means and what I should do?
Thanks.
Jaehyun

After installing open3d with conda, it doesn't work in the code

I installed open3d following the official instructions with 'conda install -c open3d-admin open3d'; however, I get errors when running the code, such as 'module 'open3d' has no attribute 'read_point_cloud''.
Other ways of installing it have been tried and it still doesn't work.

My Python version is 3.6.5.

Vulkan directory and The RandR headers Error

It couldn't find Vulkan (VULKAN_LIBRARY, VULKAN_INCLUDE_DIR), and the RandR headers were not found. I also tried the solution from #25.

OS: Ubuntu 18.04
CUDA: 10.0
Tensorflow: 1.13.1

imp@imp-HP-Z4-G4-Workstation:~/shin/Open3D-PointNet2-Semantic3D-master/tf_ops/build$ cmake -DCMAKE_C_COMPILER=/usr/bin/gcc ..
-- The CXX compiler identification is GNU 7.4.0
-- The CUDA compiler identification is NVIDIA 10.0.130
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda-10.0/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Configuring done
-- Generating done
-- Build files have been written to: /home/imp/shin/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root
Scanning dependencies of target open3d
[ 12%] Creating directories for 'open3d'
[ 25%] Performing download step (git clone) for 'open3d'
Cloning into 'open3d'...
Note: checking out '33e46f7'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 33e46f7d Merge pull request #729 from XuChengHUST/master
Submodule '3rdparty' (https://github.com/IntelVCL/Open3D-3rdparty) registered for path '3rdparty'
Cloning into '/home/imp/shin/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root/open3d/src/open3d/3rdparty'...
Submodule path '3rdparty': checked out 'b08bff7856398f3bbd66e2b10a6f32e943e4ae34'
[ 37%] No patch step for 'open3d'
[ 50%] No update step for 'open3d'
[ 62%] Performing configure step for 'open3d'
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Open3D 0.4.0.0
-- Compiling on Unix
-- Disable RealSense since it is not fully supported on Linux.
-- Using installed OpenMP 
-- Building EIGEN3 from source (BUILD_EIGEN3=ON)
-- Building GLEW from source (BUILD_GLEW=ON)
-- Building GLFW from source (BUILD_GLFW=ON)
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE  
-- Could NOT find Vulkan (missing: VULKAN_LIBRARY VULKAN_INCLUDE_DIR) 
-- Using X11 for window creation
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- Found X11: /usr/lib/x86_64-linux-gnu/libX11.so
CMake Error at 3rdparty/GLFW/CMakeLists.txt:240 (message):
  The RandR headers were not found


-- Configuring incomplete, errors occurred!
See also "/home/imp/shin/Open3D-PointNet2-Semantic3D-master/tf_ops/build/open3d_root/open3d/src/open3d-build/CMakeFiles/CMakeOutput.log".
CMakeFiles/open3d.dir/build.make:106: recipe for target 'open3d/src/open3d-stamp/open3d-configure' failed
make[2]: *** [open3d/src/open3d-stamp/open3d-configure] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/open3d.dir/all' failed
make[1]: *** [CMakeFiles/open3d.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
-- Could NOT find Open3D (missing: Open3D_DIR)
CMake Error at open3d.cmake:41 (message):
  Open3D build was not successful
Call Stack (most recent call first):
  open3d.cmake:45 (build_open3d)
  CMakeLists.txt:10 (include)


-- Configuring incomplete, errors occurred!
See also "/home/imp/shin/Open3D-PointNet2-Semantic3D-master/tf_ops/build/CMakeFiles/CMakeOutput.log".

About the version of open3d

@yxlao, when I run downsample.py, I get the error "module 'open3d.open3d' has no attribute 'voxel_down_sample_and_trace'". I looked through the open3d library and did not find that attribute; I only saw "voxel_down_sample", but it takes different arguments. So I guess the version of open3d we use is different. The version I use is 0.4. What about you? I need your help. Thank you!

Problem when classes are not present in the validation set

I have trained on my own data, which is similar to Semantic3D but with 24 classes instead of 9. During training, some of the per-class IoU values are above 0.5, which does not make sense because those classes simply do not exist in the validation set, only in the training set. I want to know how the per-class IoU is calculated in metric.py, because as far as I know it should be NaN, or 1 with a smoothing factor.
Please help; I am new to coding and I am really struggling to understand how the per-class IoU is calculated.

Thank you very much.

Invalid Argument error

InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [1,1,128,128] rhs shape= [1,1,131,128]
[[node save/Assign_41 (defined at kitti_predict.py:53) = Assign[T=DT_FLOAT, _class=["loc:@fa_layer4/conv_0/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](fa_layer4/conv_0/weights, save/RestoreV2:41)]]
[[{{node save/RestoreV2/_145}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_227_save/RestoreV2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

What is the Ubuntu version?

My machine is Ubuntu 18 with CUDA 10.

TF works: python -c "import tensorflow as tf" runs without errors.

However, I got following error

[100%] Built target open3d
-- Found Open3D at /workspace/github/tutorial-pcl/Advanced/Open3D-PointNet2-Semantic3D/tf_ops/butild/open3d_root/open3d_install/lib
-- Open3D installed to: /workspace/github/tutorial-pcl/Advanced/Open3D-PointNet2-Semantic3D/tf_ops/butild/open3d_root/open3d_install/lib
-- Looking for TensorFlow installation
CMake Error at FindTensorFlow.cmake:41 (message):
  Cannot determine TensorFlow installation directory Traceback (most recent
  call last):

    File "<string>", line 1, in <module>

  ImportError: No module named tensorflow
Call Stack (most recent call first):
  CMakeLists.txt:25 (find_package)


-- Configuring incomplete, errors occurred!

Based on my googling, I think TensorFlow and CUDA are causing the problem.

Which OS and CUDA versions do you use?

Should I use CUDA 9.0 and Ubuntu 16?

How to do online testing for Semantic3D?

How to do online testing for Semantic3D?

Can you run these commands for online testing?
python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt \
                  --set=validation \
                  --num_samples=500

Test dataset does not work

here is my command
python3 predict.py --ckpt log/semantic/best_model_epoch_405.ckpt --set=test --num_samples=500

here is the error
Dataset split: test
Loading file_prefixes: ['MarketplaceFeldkirch_Station4_rgb_intensity-reduced']
pl_points shape Tensor("Shape:0", shape=(3,), dtype=int32, device=/device:GPU:0)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/lms/pointNet2/Open3D-PointNet2-Semantic3D/util/tf_util.py:662: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
2019-11-06 10:44:50.087783: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-06 10:44:50.384107: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-06 10:44:50.384597: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x657ef10 executing computations on platform CUDA. Devices:
2019-11-06 10:44:50.384613: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1050, Compute Capability 6.1
2019-11-06 10:44:50.403814: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-11-06 10:44:50.404274: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x65e7580 executing computations on platform Host. Devices:
2019-11-06 10:44:50.404346: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-11-06 10:44:50.404616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.455
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.90GiB
2019-11-06 10:44:50.404698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-06 10:44:50.406574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-06 10:44:50.406587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-11-06 10:44:50.406593: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-11-06 10:44:50.406662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1724 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Model restored
Processing <dataset.semantic_dataset.SemanticFileData object at 0x7f485ff25400>
2019-11-06 10:44:53.848523: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.17GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:53.875321: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.02GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.036662: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.16GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.108900: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:44:54.127858: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.32GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Batch size: 32, time: 1.9697668552398682
Batch size: 32, time: 0.5003781318664551
Batch size: 32, time: 0.49585819244384766
Batch size: 32, time: 0.5022487640380859
Batch size: 32, time: 0.4994063377380371
Batch size: 32, time: 0.4927208423614502
Batch size: 32, time: 0.49471569061279297
Batch size: 32, time: 0.498868465423584
Batch size: 32, time: 0.49785780906677246
Batch size: 32, time: 0.4957921504974365
Batch size: 32, time: 0.49452805519104004
Batch size: 32, time: 0.49374890327453613
Batch size: 32, time: 0.49533581733703613
Batch size: 32, time: 0.49709129333496094
Batch size: 32, time: 0.4983334541320801
2019-11-06 10:45:31.690487: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.77GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.705401: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.65GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.814868: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.11GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.874898: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.10GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-06 10:45:31.888066: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.22GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
Batch size: 20, time: 0.6125662326812744
Exported sparse pcd to result/sparse/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.pcd
Exported sparse labels to result/sparse/MarketplaceFeldkirch_Station4_rgb_intensity-reduced.labels
Confusion matrix:
0 1 2 3 4 5 6 7 8
0 0 730814 821 29381 125951 3018863 114555 68655 6960
1 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0
IoU per class:
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
mIoU (ignoring label 0):
0.0
Overall accuracy
/home/lms/pointNet2/Open3D-PointNet2-Semantic3D/util/metric.py:83: RuntimeWarning: invalid value encountered in long_scalars
return np.trace(valid_confusion_matrix) / np.sum(valid_confusion_matrix)
nan

'tensorflow' has no attribute '__cxx11_abi_flag__'

When I compile tf_ops, an error occurs: 'tensorflow' has no attribute '__cxx11_abi_flag__'.
"FindTensorFlow.cmake" contains this code:
python -c "import tensorflow as tf; print(tf.__cxx11_abi_flag__)"
I think I have installed tensorflow 1.2 correctly. How can I solve this problem?
Thanks.

Should the training set and test set unify the center of mass?

Thank you for releasing this project. I want to use this code to run my own dataset. I find that the center of each scene in the Semantic3D and ScanNet datasets is very close, but the center of mass of each scene in my own dataset is quite different. May I ask whether this will affect the training results?

Compile TF Ops errors

Hello. When I compile TF Ops, some errors occur. How can I solve them? The errors are below.

-- The CXX compiler identification is GNU 7.3.0
-- The CUDA compiler identification is NVIDIA 9.2.148
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Configuring done
-- Generating done
-- Build files have been written to: /content/Open3D-PointNet2-Semantic3D/tf_ops/build/open3d_root
Scanning dependencies of target open3d
[ 12%] Creating directories for 'open3d'
[ 25%] Performing download step (git clone) for 'open3d'
Cloning into 'open3d'...
Note: checking out '33e46f7'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 33e46f7d Merge pull request #729 from XuChengHUST/master
Submodule '3rdparty' (https://github.com/IntelVCL/Open3D-3rdparty) registered for path '3rdparty'
Cloning into '/content/Open3D-PointNet2-Semantic3D/tf_ops/build/open3d_root/open3d/src/open3d/3rdparty'...
Submodule path '3rdparty': checked out 'b08bff7856398f3bbd66e2b10a6f32e943e4ae34'
[ 37%] No update step for 'open3d'
[ 50%] No patch step for 'open3d'
[ 62%] Performing configure step for 'open3d'
-- The C compiler identification is unknown
-- The CXX compiler identification is GNU 7.3.0
CMake Error at CMakeLists.txt:22 (project):
  No CMAKE_C_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
  the compiler, or to the compiler name if it is in the PATH.


-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Configuring incomplete, errors occurred!
See also "/content/Open3D-PointNet2-Semantic3D/tf_ops/build/open3d_root/open3d/src/open3d-build/CMakeFiles/CMakeOutput.log".
See also "/content/Open3D-PointNet2-Semantic3D/tf_ops/build/open3d_root/open3d/src/open3d-build/CMakeFiles/CMakeError.log".
CMakeFiles/open3d.dir/build.make:106: recipe for target 'open3d/src/open3d-stamp/open3d-configure' failed
make[2]: *** [open3d/src/open3d-stamp/open3d-configure] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/open3d.dir/all' failed
make[1]: *** [CMakeFiles/open3d.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
-- Could NOT find Open3D (missing: Open3D_DIR)
CMake Error at open3d.cmake:41 (message):
  Open3D build was not successful
Call Stack (most recent call first):
  open3d.cmake:45 (build_open3d)
  CMakeLists.txt:10 (include)


-- Configuring incomplete, errors occurred!
See also "/content/Open3D-PointNet2-Semantic3D/tf_ops/build/CMakeFiles/CMakeOutput.log".

Prediction 'invalid value encountered in long_scalars' error

I tried predicting the MarketplaceFeldkirch_Station4_rgb_intensity-reduced dataset and ended up with the following issue.

RuntimeWarning: invalid value encountered in long_scalars
return np.trace(valid_confusion_matrix) / np.sum(valid_confusion_matrix)

Would anyone please advise on how to fix this error?
This is the command I am using -

python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt --set=test --num_samples=500

NOTE - If I switch to --set=validation, it works, but it doesn't work with the test set.

Thanks
Nandhiny

Could not find TensorFlow

Hello, I am having issues with building tf_ops. I tested TensorFlow within the virtualenv, and the command

python -c "import tensorflow as tf"

showed no errors. But while building I get this error message:

-- The CXX compiler identification is MSVC 19.16.27027.1
-- The CUDA compiler identification is NVIDIA 10.1.105
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
CMake Error at C:/Users/lenovo/AppData/Local/Programs/Python/Python36/Scripts/activate/Lib/site-packages/cmake/data/share/cmake-3.13/Modules/ExternalProject.cmake:2329 (message):
  error: could not find git for clone of open3d
Call Stack (most recent call first):
  C:/Users/lenovo/AppData/Local/Programs/Python/Python36/Scripts/activate/Lib/site-packages/cmake/data/share/cmake-3.13/Modules/ExternalProject.cmake:3105 (_ep_add_download_command)
  CMakeLists.txt:6 (ExternalProject_Add)


-- Configuring incomplete, errors occurred!
See also "C:/Users/lenovo/Documents/Fax/4/Lasersko skeniranje/Projekat/Open3D-PointNet2-Semantic3D/tf_ops/build/open3d_root/CMakeFiles/CMakeOutput.log".
Microsoft (R) Build Engine version 15.9.21+g9802d43bc3 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

MSBUILD : error MSB1001: Unknown switch.
Switch: -j

For switch syntax, type "MSBuild /help"
-- Found Open3D at C:/Program Files (x86)/Open3D/lib
-- Open3D installed to: C:/Program Files (x86)/Open3D/lib
-- Looking for TensorFlow installation
-- TensorFlow_INCLUDE_DIR: C:\Users\lenovo\AppData\Local\Programs\Python\Python36\Scripts\activate\lib\site-packages\tensorflow\include
-- TensorFlow_DIR: C:\Users\lenovo\AppData\Local\Programs\Python\Python36\Scripts\activate\lib\site-packages\tensorflow
-- TensorFlow_CXX_ABI: 0
-- TensorFlow_GIT_VERSION: b'unknown'
-- TensorFlow_VERSION: 1.13.1
CMake Error at C:/Users/lenovo/AppData/Local/Programs/Python/Python36/Scripts/activate/Lib/site-packages/cmake/data/share/cmake-3.13/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find TensorFlow (missing: TensorFlow_FRAMEWORK_LIBRARY)
Call Stack (most recent call first):
  C:/Users/lenovo/AppData/Local/Programs/Python/Python36/Scripts/activate/Lib/site-packages/cmake/data/share/cmake-3.13/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  FindTensorFlow.cmake:112 (find_package_handle_standard_args)
  CMakeLists.txt:25 (find_package)


-- Configuring incomplete, errors occurred!
See also "C:/Users/lenovo/Documents/Fax/4/Lasersko skeniranje/Projekat/Open3D-PointNet2-Semantic3D/tf_ops/build/CMakeFiles/CMakeOutput.log".`

I have tried building with CMake GUI 3.14, but I still get the same error. I also tried updating and reinstalling (ignore-installed) tensorflow, which did not help.

What am I doing wrong? Thank you.

missing: Tensorflow framework library

Good morning,

I am on step 4 (Compile TF_OPS) and ran into an error. The message states: Could NOT find TensorFlow (missing: TensorFlow_FRAMEWORK_LIBRARY)

I installed tensorflow via pip. Is that the incorrect way to do it? I am running Ubuntu 18.04. Attached is a screenshot of the error. Thank you so much!

open3d_error

open3d version

Is there any description of the open3d version required for this project?

Free(): invalid pointer

When I run train.py, this error occurred:

free(): invalid pointer

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

How to evaluate KITTI?

Hi,

Thanks for sharing this demo, it's an amazing work !

In kitti_predict.py, I found it only performs two functions: predicting the labels and visualizing the prediction results.
I didn't find the ground-truth labels of KITTI. Could you tell me where I can find them?
I only found this:

     # Load label. In pure test set, fill with zeros.
     self.labels = np.zeros(len(self.points)).astype(bool)

Thank you for your help.

[question] Choice of using FLANN

Hello,

Great work on Open3D version 0.5.0!
I was just wondering what your motivation was to use FLANN rather than, e.g., an octree data structure to run the ThreeNN operations directly on the GPU, which has been shown in many contexts to run faster. Was it simply the ease of using an already available library, or something to do with performance?

Best!

Can this code be used to train the S3DIS dataset?

Thank you for your excellent code. I want to perform indoor data segmentation, so I need to train the model on the S3DIS dataset. How can I modify this code? Is it only necessary to make changes in the data preparation phase?

system cannot find the file specified

I downloaded and extracted the dataset.
I am getting this error when trying to run preprocess.py

C:\Users\PNAGARAJ\Lidar\Open3D-PointNet2-Semantic3D-master
file C:\Users\PNAGARAJ\Lidar\Open3D-PointNet2-Semantic3D-master\dataset\semantic_raw\bildstein_station1_xyz_intensity_rgb.txt
txt: C:\Users\PNAGARAJ\Lidar\Open3D-PointNet2-Semantic3D-master\dataset\semantic_raw\bildstein_station1_xyz_intensity_rgb.txt
pts: C:\Users\PNAGARAJ\Lidar\Open3D-PointNet2-Semantic3D-master\dataset\semantic_raw\bildstein_station1_xyz_intensity_rgb.pts
Traceback (most recent call last):
File "C:/Users/PNAGARAJ/Lidar/Open3D-PointNet2-Semantic3D-master/preprocess.py", line 69, in
point_cloud_txt_to_pcd(raw_dir, file_prefix)
File "C:/Users/PNAGARAJ/Lidar/Open3D-PointNet2-Semantic3D-master/preprocess.py", line 49, in point_cloud_txt_to_pcd
prepend_line(pts_file, str(wc(txt_file)))
File "C:/Users/PNAGARAJ/Lidar/Open3D-PointNet2-Semantic3D-master/preprocess.py", line 11, in wc
["wc", "-l", file_name], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
File "C:\Users\PNAGARAJ\AppData\Local\Continuum\anaconda3\envs\open3d\lib\subprocess.py", line 676, in init
restore_signals, start_new_session)
File "C:\Users\PNAGARAJ\AppData\Local\Continuum\anaconda3\envs\open3d\lib\subprocess.py", line 957, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

segmentation_fault while running test_tf_ops.py

I am trying to run the test file, but I keep getting this error:

Consolidate compiler generated dependencies of target tf_sampling
[ 40%] Built target tf_sampling
Consolidate compiler generated dependencies of target tf_interpolate
[ 60%] Built target tf_interpolate
Consolidate compiler generated dependencies of target tf_grouping
[100%] Built target tf_grouping
user@ubuntu:/Desktop/Open3D-PointNet2-Semantic3D/tf_ops/build$ cd ..
user@ubuntu:
/Desktop/Open3D-PointNet2-Semantic3D/tf_ops$ sudo python3.6 test_tf_ops.py
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Running tests under Python 3.6.15: /usr/bin/python3.6
[ RUN ] TestGrouping.test
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:538: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor_proto.tensor_content = nparray.tostring()
32 512 3 128
Tensor("Const_1:0", shape=(32, 512, 3), dtype=float32, device=/device:GPU:0) (32, 1, 512, 3)
Tensor("Sum:0", shape=(32, 128, 512), dtype=float32, device=/device:GPU:0) 64
Fatal Python error: Segmentation fault

Current thread 0x00007f55b910c740 (most recent call first):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1864 in _create_c_op
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 2027 in init
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3616 in create_op
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507 in new_func
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788 in _apply_op_helper
File "", line 183 in selection_sort
File "/home/user/Desktop/Open3D-PointNet2-Semantic3D/tf_ops/tf_grouping.py", line 40 in select_top_k
File "/home/user/Desktop/Open3D-PointNet2-Semantic3D/tf_ops/tf_grouping.py", line 84 in knn_point
File "test_tf_ops.py", line 23 in test
File "/usr/lib/python3.6/unittest/case.py", line 605 in run
File "/usr/lib/python3.6/unittest/case.py", line 653 in call
File "/usr/lib/python3.6/unittest/suite.py", line 122 in run
File "/usr/lib/python3.6/unittest/suite.py", line 84 in call
File "/usr/lib/python3.6/unittest/suite.py", line 122 in run
File "/usr/lib/python3.6/unittest/suite.py", line 84 in call
File "/usr/lib/python3.6/unittest/runner.py", line 176 in run
File "/usr/local/lib/python3.6/dist-packages/absl/testing/_pretty_print_reporter.py", line 86 in run
File "/usr/lib/python3.6/unittest/main.py", line 256 in runTests
File "/usr/lib/python3.6/unittest/main.py", line 95 in init
File "/usr/local/lib/python3.6/dist-packages/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result
File "/usr/local/lib/python3.6/dist-packages/absl/testing/absltest.py", line 2569 in run_tests
File "/usr/local/lib/python3.6/dist-packages/absl/testing/absltest.py", line 2156 in _run_in_app
File "/usr/local/lib/python3.6/dist-packages/absl/testing/absltest.py", line 2049 in main
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py", line 55 in g_main
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 258 in _run_main
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 312 in run
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40 in run
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py", line 64 in main_wrapper
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/benchmark.py", line 407 in benchmarks_main
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py", line 65 in main
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/test.py", line 64 in main
File "test_tf_ops.py", line 137 in
*** Received signal 11 ***
*** BEGIN MANGLED STACK TRACE ***
/usr/local/lib/python3.6/dist-packages/tensorflow/python/../libtensorflow_framework.so.1(+0xfd02db)[0x7f552825b2db]
/lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7f55b931b090]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f55b931b00b]
/lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7f55b931b090]
/home/user/Desktop/Open3D-PointNet2-Semantic3D/tf_ops/build/libtf_grouping.so(+0x7c18)[0x7f5579227c18]
/home/user/Desktop/Open3D-PointNet2-Semantic3D/tf_ops/build/libtf_grouping.so(ZNSt17_Function_handlerIFN10tensorflow6StatusEPNS0_15shape_inference16InferenceContextEEPS5_E9_M_invokeERKSt9_Any_dataOS4+0x26)[0x7f5579228b56]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/../libtensorflow_framework.so.1(_ZN10tensorflow15shape_inference16InferenceContext3RunERKSt8functionIFNS_6StatusEPS1_EE+0x4d)[0x7f5527f5ed1d]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow12ShapeRefiner10RunShapeFnEPKNS_4NodeEPKNS_18OpRegistrationDataEPNS_24ExtendedInferenceContextE+0x230)[0x7f552f62d720]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow12ShapeRefiner7AddNodeEPKNS_4NodeE+0xcb8)[0x7f552f62f248]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so(TF_FinishOperation+0x44a)[0x7f552cfe7f1a]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so(+0x241bdb6)[0x7f552b058db6]
python3.6[0x50e9f5]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x780)[0x50d970]
python3.6[0x59f271]
python3.6[0x546424]
python3.6(_PyObject_FastCallKeywords+0x4ae)[0x5abf5e]
python3.6[0x50ede5]
python3.6(_PyEval_EvalFrameDefault+0x109c)[0x511fbc]
python3.6[0x593ee2]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6(_PyEval_EvalFrameDefault+0x1fb8)[0x512ed8]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x109c)[0x511fbc]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x109c)[0x511fbc]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x780)[0x50d970]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6(_PyEval_EvalFrameDefault+0x1fb8)[0x512ed8]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x3bd)[0x50d5ad]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6[0x5444f3]
python3.6(_PyObject_FastCallKeywords+0x5bc)[0x5ac06c]
python3.6[0x50ede5]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x780)[0x50d970]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6(_PyEval_EvalFrameDefault+0x1fb8)[0x512ed8]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x3bd)[0x50d5ad]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6[0x5444f3]
python3.6(_PyObject_FastCallKeywords+0x5bc)[0x5ac06c]
python3.6[0x50ede5]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x780)[0x50d970]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6(_PyEval_EvalFrameDefault+0x1fb8)[0x512ed8]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x3bd)[0x50d5ad]
python3.6[0x59f271]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6[0x5444f3]
python3.6(_PyObject_FastCallKeywords+0x5bc)[0x5ac06c]
python3.6[0x50ede5]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6(_PyFunction_FastCallDict+0x780)[0x50d970]
python3.6[0x59f271]
python3.6[0x546424]
python3.6[0x554a17]
python3.6(PyObject_Call+0x43)[0x5ad083]
python3.6(_PyEval_EvalFrameDefault+0x1fb8)[0x512ed8]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x109c)[0x511fbc]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50df84]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x1bd)[0x5110dd]
python3.6[0x50fae2]
python3.6[0x50e15d]
python3.6[0x50ecb6]
python3.6(_PyEval_EvalFrameDefault+0x109c)[0x511fbc]
python3.6[0x50fae2]
*** END MANGLED STACK TRACE ***

*** Begin stack trace ***
tensorflow::CurrentStackTrace()

gsignal


std::_Function_handler<tensorflow::Status (tensorflow::shape_inference::InferenceContext*), tensorflow::Status (*)(tensorflow::shape_inference::InferenceContext*)>::_M_invoke(std::_Any_data const&, tensorflow::shape_inference::InferenceContext*&&)
tensorflow::shape_inference::InferenceContext::Run(std::function<tensorflow::Status (tensorflow::shape_inference::InferenceContext*)> const&)
tensorflow::ShapeRefiner::RunShapeFn(tensorflow::Node const*, tensorflow::OpRegistrationData const*, tensorflow::ExtendedInferenceContext*)
tensorflow::ShapeRefiner::AddNode(tensorflow::Node const*)
TF_FinishOperation


_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault

_PyFunction_FastCallDict


_PyObject_FastCallKeywords

_PyEval_EvalFrameDefault

PyObject_Call
_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call
_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call

_PyObject_FastCallKeywords

_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call
_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call

_PyObject_FastCallKeywords

_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call
_PyEval_EvalFrameDefault

_PyFunction_FastCallDict

PyObject_Call

_PyObject_FastCallKeywords

_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault

_PyFunction_FastCallDict



PyObject_Call
_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault


_PyEval_EvalFrameDefault



_PyEval_EvalFrameDefault

*** End stack trace ***
Aborted

I am using the following environment:
Ubuntu Desktop 20.04
python3.6
tensorflow 1.14.0
cuda v10.1
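
For reference, a segfault that happens during shape inference inside libtf_grouping.so usually points to the ops having been built against different TensorFlow headers (or a different CUDA toolkit) than the TensorFlow that is actually running. A minimal, hedged sanity check, assuming TF >= 1.14 and the default tf_ops/build output path from the README, is to load the library directly against the active interpreter:

import tensorflow as tf

# Hedged sketch: confirm which TensorFlow the custom ops must match, then try
# loading the compiled library directly (assumes the default tf_ops/build path).
print(tf.__version__)
print(tf.sysconfig.get_include())   # headers the ops should have been built against
grouping_module = tf.load_op_library("tf_ops/build/libtf_grouping.so")
print(grouping_module.query_ball_point)  # op wrapper registered by the library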

How to get the .pcd file

When I try to execute "python preprocess.py", a memory error occurs while converting seg27_station1_intensity_rgb.txt into a .pcd file. I don't know how to solve it; thanks for helping. My computer has 16 GB of RAM.
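
One possible direction, sketched below under the assumption of a modern Open3D API (open3d.geometry.PointCloud, open3d.utility.Vector3dVector, open3d.io.write_point_cloud) and pandas for chunked reading, is to stream the .txt file in chunks and keep float32 arrays so peak memory stays well below what numpy.loadtxt on the whole file would need. The function name txt_to_pcd_chunked is hypothetical; this is an illustrative workaround, not the repository's preprocess.py.

import numpy as np
import open3d
import pandas as pd

# Hedged sketch: convert a large Semantic3D .txt (x y z intensity r g b per line)
# to .pcd while bounding peak memory (illustrative, not the repo's preprocess.py).
def txt_to_pcd_chunked(txt_file, pcd_file, chunksize=5_000_000):
    xyz_chunks, rgb_chunks = [], []
    for chunk in pd.read_csv(txt_file, delim_whitespace=True, header=None,
                             dtype=np.float32, chunksize=chunksize):
        values = chunk.values
        xyz_chunks.append(values[:, 0:3])
        rgb_chunks.append(values[:, 4:7] / 255.0)
    pcd = open3d.geometry.PointCloud()
    pcd.points = open3d.utility.Vector3dVector(np.concatenate(xyz_chunks).astype(np.float64))
    pcd.colors = open3d.utility.Vector3dVector(np.concatenate(rgb_chunks).astype(np.float64))
    open3d.io.write_point_cloud(pcd_file, pcd)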

Downsampling error

Processing: bildstein_station1_xyz_intensity_rgb
Num points: 29697591
Num points after 0-skip: 9476296
Traceback (most recent call last):
File "downsample.py", line 97, in
voxel_size,
File "downsample.py", line 49, in down_sample
sparse_pcd, cubics_ids = open3d.voxel_down_sample_and_trace(
AttributeError: module 'open3d' has no attribute 'voxel_down_sample_and_trace'

If I use

sparse_pcd, cubics_ids = dense_pcd.voxel_down_sample_and_trace(voxel_size, min_bound, max_bound, False)

then

Processing: bildstein_station1_xyz_intensity_rgb
Num points: 29697591
Num points after 0-skip: 9476296
Traceback (most recent call last):
File "downsample.py", line 100, in
voxel_size,
File "downsample.py", line 53, in down_sample
sparse_pcd, cubics_ids = dense_pcd.voxel_down_sample_and_trace(voxel_size, min_bound, max_bound, False)
ValueError: too many values to unpack (expected 2)

But if I use:

sparse_pcd = dense_pcd.voxel_down_sample_and_trace(voxel_size, min_bound, max_bound, False)

it works, but I'm not sure where the cubics_ids are stored. I know that sparse_pcd[0] is the downsampled point cloud, but I can't interpret sparse_pcd[1] and sparse_pcd[2].
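
In newer Open3D releases the function moved from the module level to a PointCloud method and, as far as I can tell, returns three values: the down-sampled cloud, a cubic-id matrix, and the original point indices traced to each output point. A hedged sketch of the updated call (assuming an Open3D >= 0.10-style API; the file path comes from the preprocessed dataset and the voxel_size value is only a placeholder):

import open3d

# Hedged sketch: unpacking the three return values of the newer, method-style API.
dense_pcd = open3d.io.read_point_cloud(
    "dataset/semantic_raw/bildstein_station1_xyz_intensity_rgb.pcd")
voxel_size = 0.05  # placeholder value, not the repository's setting
min_bound = dense_pcd.get_min_bound() - voxel_size * 0.5
max_bound = dense_pcd.get_max_bound() + voxel_size * 0.5
sparse_pcd, cubic_ids, original_indices = dense_pcd.voxel_down_sample_and_trace(
    voxel_size, min_bound, max_bound, False)
# original_indices[i] lists the dense-cloud indices merged into sparse point i.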

interpolate.py raises Segmentation fault (core dumped)

After finishing testing on the test dataset with PointNet++, there was an error with interpolate.py. I was working with a dataset that merges Semantic3D with my own data, with 10 label categories.
Segmentation fault (core dumped)

test_tf_ops.py failed. libtf_grouping.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrESs

When I was building tf_ops, I was able to build all of those .so files. However, when I ran test_tf_ops.py, it gave me this error.

$ python tf_ops/test_tf_ops.py 
2020-02-14 17:46:28.741274: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "tf_ops/test_tf_ops.py", line 4, in <module>
    from tf_grouping import query_ball_point, group_point, knn_point
  File "/home/user/Open3D-PointNet2-Semantic3D/tf_ops/tf_grouping.py", line 9, in <module>
    os.path.join(BASE_DIR, "build", "libtf_grouping.so")
  File "/home/user/.conda/envs/my_env/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /home/user/Open3D-PointNet2-Semantic3D/tf_ops/build/libtf_grouping.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrESs
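
The undefined OpDefBuilder::Attr symbol typically means the ops were compiled with a different C++ ABI setting (or against different headers) than the installed TensorFlow. One hedged way to check, assuming TF >= 1.14 where tf.sysconfig exposes the build flags, is to print the compile and link flags of the running TensorFlow and make sure the tf_ops build uses the same ones (in particular -D_GLIBCXX_USE_CXX11_ABI):

import tensorflow as tf

# Hedged sketch: flags the custom ops should be rebuilt with (assumes TF >= 1.14).
print(" ".join(tf.sysconfig.get_compile_flags()))  # typically includes -D_GLIBCXX_USE_CXX11_ABI=...
print(" ".join(tf.sysconfig.get_link_flags()))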

Request for a KITTI demo

Hi, @yxlao ,

The provided code only covers the Semantic3D dataset. Your Open3D 0.5 introduction video shows real-time inference on the KITTI lidar dataset. Would you also update the pipeline for the KITTI lidar dataset?

THX!

Error about libraries

Hello everyone. I am facing an error when running the "make" step. How can I solve this problem? I'm using Google Colab.
Thank you.

The error message:

[ 10%] Linking CXX shared library libtf_sampling.so
/usr/bin/ld: cannot find -ltensorflow_framework
collect2: error: ld returned 1 exit status
CMakeFiles/tf_sampling.dir/build.make:119: recipe for target 'libtf_sampling.so' failed
make[2]: *** [libtf_sampling.so] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/tf_sampling.dir/all' failed
make[1]: *** [CMakeFiles/tf_sampling.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
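
On newer TensorFlow pip packages the shared library ships as libtensorflow_framework.so.1 (or .so.2), so a plain -ltensorflow_framework link can fail. A hedged way to locate the library directory from Python, assuming TF >= 1.14, is shown below; symlinking libtensorflow_framework.so.1 to libtensorflow_framework.so in that directory is a common workaround, though not necessarily the repository's intended fix.

import os
import tensorflow as tf

# Hedged sketch: find where the TensorFlow shared libraries live (assumes TF >= 1.14).
lib_dir = tf.sysconfig.get_lib()
print(lib_dir)
print([f for f in os.listdir(lib_dir) if f.startswith("libtensorflow_framework")])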

Prepare my own dataset

If I have to prepare my own dataset, how should I proceed?
Can anyone please guide me on that?
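
As a hedged illustration: the loaders in this repo expect the Semantic3D layout, i.e. one "x y z intensity r g b" line per point in the .txt file plus one integer label per line in the matching .labels file, with label 0 meaning unlabeled. A custom scan could be written out in that layout before running preprocess.py and downsample.py; the file name and the random placeholder data below are hypothetical.

import numpy as np

# Hedged sketch: write a custom cloud in the Semantic3D .txt/.labels layout
# (placeholder random data; replace with your own points, colors and labels).
n = 1000
xyz = np.random.rand(n, 3) * 10.0
intensity = np.random.randint(0, 256, (n, 1))
rgb = np.random.randint(0, 256, (n, 3))
labels = np.random.randint(1, 9, n)  # 1..8 are real classes, 0 means unlabeled

data = np.hstack([xyz, intensity, rgb])
np.savetxt("my_scan_xyz_intensity_rgb.txt", data,
           fmt="%.6f %.6f %.6f %d %d %d %d")
np.savetxt("my_scan_xyz_intensity_rgb.labels", labels, fmt="%d")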

Downloaded set vs. Pre-processed set

Preprocessing tries to process this set of point clouds:

test_file_prefixes = [
"birdfountain_station1_xyz_intensity_rgb",
"castleblatten_station1_intensity_rgb",
"castleblatten_station5_xyz_intensity_rgb",
"marketplacefeldkirch_station1_intensity_rgb",
"marketplacefeldkirch_station4_intensity_rgb",
"marketplacefeldkirch_station7_intensity_rgb",
"sg27_station10_intensity_rgb",
"sg27_station3_intensity_rgb",
"sg27_station6_intensity_rgb",
"sg27_station8_intensity_rgb",
"sg28_station2_intensity_rgb",
"sg28_station5_xyz_intensity_rgb",
"stgallencathedral_station1_intensity_rgb",
"stgallencathedral_station3_intensity_rgb",
"stgallencathedral_station6_intensity_rgb",
]

However, the downloaded set is this:

test_file_prefixes = [
"MarketplaceFeldkirch_Station4_rgb_intensity-reduced",
"StGallenCathedral_station6_rgb_intensity-reduced",
"sg27_station10_rgb_intensity-reduced",
"sg28_Station2_rgb_intensity-reduced",
]

[Problem] VS version

I am using VS2017 + CUDA 9.0 + CMake 3.14, and cmake reports an error.
"Selecting Windows SDK version 10.0.17763.0 to target Windows 10.0.17134.
The CXX compiler identification is MSVC 19.16.27024.1
The CUDA compiler identification is unknown
Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe
Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe -- works
Detecting CXX compiler ABI info
Detecting CXX compiler ABI info - done
Detecting CXX compile features
Detecting CXX compile features - done
CMake Error at CMakeLists.txt:5 (project):
No CMAKE_CUDA_COMPILER could be found.
Configuring incomplete, errors occurred!"

My CMakeError.log shows the details: "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include\type_traits(603): error : expression must have a constant value [D:\pointcloud\Open3D-PointNet2-Semantic3D-master\Open3D-PointNet2-Semantic3D-master\tf_ops\build\CMakeFiles\3.14.0-rc2\CompilerIdCUDA\CompilerIdCUDA.vcxproj]"

And if I compile CompilerIdCUDA.vcxproj with VS2017 but choose the VS2015 (v140) platform toolset, it passes successfully without errors. Does that mean I should install VS2015?

Downsample problem

There is an error in downsample.py:
module 'open3d.open3d' has no attribute 'voxel_down_sample_and_trace'?

My open3d is installed from conda, and its version is 0.40.0.

How to change batch size?

I tried to run "train.py" (step 5, "train", in the usage instructions).
But I get the following error: "ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape".

Should I make the batch size smaller?
If so, how do I change the batch size?

[my environment]
gpu Geforce1050 ti
memory 16 GiB
swap 16 GiB
ubuntu 16.04
cuda 9.0
cudnn 7.5.0
[anaconda3]
python 3.6
tensorflow-gpu 1.12.0
scikit-learn 0.21.3
open3d-python 0.7.0.0
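
For context on the OOM in the log below: the failing tensor has shape [16, 128, 8192, 1] in float32, so its memory scales linearly with the batch size of 16, and halving the batch roughly halves that activation. A small, hedged arithmetic sketch (illustrating only the scaling, not the repository's mechanism for configuring the batch size):

# Hedged sketch: memory of the activation reported in the OOM message below.
batch, channels, points = 16, 128, 8192
bytes_per_float32 = 4
mib = batch * channels * points * bytes_per_float32 / 2**20
print(f"{mib:.0f} MiB for one [batch, 128, 8192, 1] float32 tensor")  # 64 MiB at batch=16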


2019-08-25 14:54:06.964697: W tensorflow/core/common_runtime/bfc_allocator.cc:271] *******************************xx***
2019-08-25 14:54:06.964735: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at conv_ops.cc:446 : Resource exhausted: OOM when allocating tensor with shape[16,128,8192,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16,128,8192,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node fa_layer4/conv_2/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](fa_layer4/conv_1/Relu, fa_layer4/conv_2/weights/read/_211)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad/_433}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4838_gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 469, in
train()
File "train.py", line 437, in train
train_one_epoch(sess, ops, train_writer, stack_train)
File "train.py", line 243, in train_one_epoch
feed_dict=feed_dict,
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16,128,8192,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node fa_layer4/conv_2/Conv2D (defined at /home/hiwasawa/PointNet2/Open3D-PointNet2-Semantic3D-master/util/tf_util.py:186) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](fa_layer4/conv_1/Relu, fa_layer4/conv_2/weights/read/_211)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad/_433}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4838_gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'fa_layer4/conv_2/Conv2D', defined at:
File "train.py", line 469, in
train()
File "train.py", line 359, in train
bn_decay=bn_decay,
File "/home/hiwasawa/PointNet2/Open3D-PointNet2-Semantic3D-master/model.py", line 128, in get_model
scope="fa_layer4",
File "/home/hiwasawa/PointNet2/Open3D-PointNet2-Semantic3D-master/util/pointnet_util.py", line 323, in pointnet_fp_module
bn_decay=bn_decay,
File "/home/hiwasawa/PointNet2/Open3D-PointNet2-Semantic3D-master/util/tf_util.py", line 186, in conv2d
data_format=data_format,
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 957, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/hiwasawa/anaconda3/envs/pointNet2_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[16,128,8192,1] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node fa_layer4/conv_2/Conv2D (defined at /home/hiwasawa/PointNet2/Open3D-PointNet2-Semantic3D-master/util/tf_util.py:186) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](fa_layer4/conv_1/Relu, fa_layer4/conv_2/weights/read/_211)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[{{node gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad/_433}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4838_gradients/layer2/conv2/BiasAdd_grad/BiasAddGrad", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Broken pipe error

Hi, all,

I got the following broken pipe error when the training came to the final epoch (epoch=499):

in epoch 498
max_epoch 500
**** EPOCH 498 ****
2019-02-26 05:38:35.735413
Progress: [##########] 100%mean loss: 0.082965
Overall accuracy : 0.991698
Average IoU : 0.963038
IoU of man-made terrain : 0.970558
IoU of natural terrain : 0.981679
IoU of high vegetation : 0.994769
IoU of low vegetation : 0.937876
IoU of buildings : 0.993800
IoU of hard scape : 0.939500
IoU of scanning artifact : 0.923614
IoU of cars : 0.962506
in epoch 499
max_epoch 500
**** EPOCH 499 ****
2019-02-26 05:39:40.278005
Progress: [##########] 100%mean loss: 0.077413
Overall accuracy : 0.992196
Average IoU : 0.962089
IoU of man-made terrain : 0.971449
IoU of natural terrain : 0.982647
IoU of high vegetation : 0.996347
IoU of low vegetation : 0.935048
IoU of buildings : 0.994343
IoU of hard scape : 0.937210
IoU of scanning artifact : 0.921430
IoU of cars : 0.958242
Process ForkPoolWorker-1:1:
Traceback (most recent call last):
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 125, in worker
    put((job, i, result))
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 397, in _send_bytes
    self._send(header)
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/pool.py", line 130, in worker
    put((job, i, (False, wrapped)))
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/queues.py", line 347, in put
    self._writer.send_bytes(obj)
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/root/anaconda3/envs/tf/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
(tf) root@milton-ThinkCentre-M93p:/data/code8/Open3D-PointNet2-Semantic3D#

Is something configured wrong, or should I just ignore this error?

My environment is:
tensorflow 1.12
cuda 9.0 + cudnn 7.5
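
Since training reached epoch 499/500 with sensible metrics, this most likely just means the background multiprocessing workers that feed batches were still writing to their result pipe when the main process exited, so it is probably harmless. A hedged sketch of a cleaner shutdown, assuming the trainer keeps a multiprocessing.Pool ("pool" is an assumed name) for background batch loading:

import multiprocessing

# Hedged sketch: shut the batch-loading pool down cleanly before the process
# exits, so workers are not left writing to a closed pipe ("pool" stands in for
# whatever multiprocessing.Pool the trainer actually uses).
pool = multiprocessing.Pool(processes=2)
# ... training loop submits batch-loading jobs to the pool here ...
pool.close()  # stop accepting new work
pool.join()   # wait for outstanding jobs instead of tearing the pipe down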

Error: macro "fmt" passed 2 arguments, but takes just 1 fmt(out, std::get<I>(tup));

I followed the steps given in the README to build tf_ops, but when I tried to make tf_interpolate.so, it gave me this error. I'm wondering if Open3D's fmt library conflicts with absl's fmt.

$ make
[ 40%] Built target tf_sampling
[ 80%] Built target tf_grouping
[ 90%] Building CXX object CMakeFiles/tf_interpolate.dir/tf_interpolate.cpp.o
In file included from /home/usr/.conda/envs/myEnv/lib/python3.6/site-packages/tensorflow_core/include/absl/strings/str_join.h:59,
                 from /home/usr/.conda/envs/myEnv/lib/python3.6/site-packages/tensorflow_core/include/tensorflow/core/lib/core/errors.h:21,
                 from /home/usr/.conda/envs/myEnv/lib/python3.6/site-packages/tensorflow_core/include/tensorflow/core/framework/op.h:26,
                 from /home/usr/o3d-sem3d/Open3D-PointNet2-Semantic3D/tf_ops/tf_interpolate.cpp:8:
/home/usr/.conda/envs/myEnv/lib/python3.6/site-packages/tensorflow_core/include/absl/strings/internal/str_join_internal.h:267:30: error: macro "fmt" passed 2 arguments, but takes just 1
     fmt(out, std::get<I>(tup));
                              ^
make[2]: *** [CMakeFiles/tf_interpolate.dir/tf_interpolate.cpp.o] Error 1
make[1]: *** [CMakeFiles/tf_interpolate.dir/all] Error 2
make: *** [all] Error 2
