virtual-vehicle / pointcloudset
Efficient analysis of large datasets of point clouds recorded over time
Home Page: https://virtual-vehicle.github.io/pointcloudset/
License: MIT License
Describe the bug
Plotting large point clouds (more than 500,000 points) consumes a lot of RAM (over 8 GB).
Expected behavior
The documentation should state up to which point-cloud sizes plotting is reasonable, or the program should provide visualization options that reduce memory use, such as random subsampling or spatial subsampling to uniform density.
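A hedged sketch of what a random-subsampling option could look like before plotting - the function name and defaults are illustrative, not part of the current API:

```python
import numpy as np
import pandas as pd

def random_subsample(points: pd.DataFrame, max_points: int = 500_000,
                     seed: int = 0) -> pd.DataFrame:
    """Return at most max_points rows, sampled uniformly at random.

    Keeps the original row order so downstream code sees a stable frame.
    """
    if len(points) <= max_points:
        return points
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=max_points, replace=False)
    return points.iloc[np.sort(idx)]

# Example: downsample a 1,000,000-point cloud to 100,000 points before plotting
cloud = pd.DataFrame(np.random.rand(1_000_000, 3), columns=["x", "y", "z"])
small = random_subsample(cloud, max_points=100_000)
```

Spatial subsampling to uniform density would need a voxel-grid step instead, but even this simple random variant keeps the plot memory bounded.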
Operations on the dataset level require original_id, which is rather specific to Ouster sensors. How should methods like max or mean over the whole dataset be handled if no original_id is present?
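One option, sketched here with plain pandas, is to fall back to point-weighted aggregation across frames when no original_id is available; the helper and its semantics are an assumption for discussion, not the package's current behavior:

```python
import pandas as pd

def dataset_mean(frames: list[pd.DataFrame], column: str) -> float:
    """Mean of one column across all frames, without needing original_id.

    Point-weighted: every point counts once, regardless of which frame
    it belongs to (frames may have different sizes).
    """
    total = sum(frame[column].sum() for frame in frames)
    count = sum(len(frame) for frame in frames)
    return total / count

frames = [pd.DataFrame({"x": [1.0, 2.0]}), pd.DataFrame({"x": [3.0]})]
dataset_mean(frames, "x")  # 2.0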
Use VZ6000 las data for development and add it to the test data.
The name base should be avoided, since it would install into or override the default conda environment. A name like pointcloudset or pointcloudset-dev would be a better solution.
pointcloudset/conda/environment.yml
Line 1 in 09f7bfe
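A minimal sketch of how the renamed environment file could look; the channel and dependency lines are placeholders, not the project's actual pins:

```yaml
# conda/environment.yml -- "base" would clash with the default conda
# environment, so use a project-specific name instead
name: pointcloudset-dev
channels:
  - conda-forge
dependencies:
  - python=3.8   # placeholder version, not the project's actual pin
```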
The suite of tests looks good, but it could actually be made a lot more comprehensive and smaller with the use of parameterized test cases and test fixtures.
I had to make some minor adjustments to build the normal Dockerfile on my machine. I wanted to ask about the reasoning behind having one Dockerfile that only installs the dependencies and a second one that uses the tgoelles/pointcloudset_base:latest base image to install the actual tool on top - could you explain the reason for that? To me it seems like it would be a lot easier to just have one Dockerfile which contains the installed package.
Also, the paths in the main Dockerfile seem a bit off - is that due to the automated CI actions and how they transform the paths?
I am writing this issue as a JOSS reviewer. See more info here: openjournals/joss-reviews#3471.
The 'Working_with_kitti_dataset' notebook gives the following error:
FileNotFoundError: [Errno 2] No such file or directory: '/pointcloudset/tutorial_notebooks/kitti_2011_09_26_drive_0002_synced.bag'
I am using the pointcloudset docker container. Since the python code downloads the data, I'd expect this to work 'out-of-the-box', without the need to configure the paths manually. The bag file seems to be missing entirely.
VS Code fails to discover the tests, due to some strange behaviour of rospy with logging:
VS Code runs pytest --collect-only to find the tests, and this produces an error.
Workaround:
Ignore the message and use the Python Test Explorer for Visual Studio Code extension.
See also the open issue:
pytest-dev/pytest#5502
I tried all the solutions there, but none of them worked.
Another solution would be to get rid of rospy in the package and use it only in the command-line tool. This would mean no more direct loading of rosbag files.
Use GitHub Actions to create Docker images for amd64 and arm64.
Is your feature request related to a problem? Please describe.
Make a basic animation of a dataset with dataset.animate.
Describe the solution you'd like
It would be great to have it interactive, with a slider for the frames.
Describe alternatives you've considered
Maybe without the slider.
Additional context
Plotly Express has an animation feature, but it does not fit this problem.
A simple command-line tool to extract one or more frames from a bagfile. The frame(s) should be saved as a .csv or .las file for further processing in CloudCompare etc.
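The requested tool could start from an argparse skeleton like the following; the option names and the export helper are illustrative assumptions, and the actual pointcloudset loading call is left out since the real Dataset API may differ:

```python
"""Sketch of a frame-extraction CLI (hypothetical interface, not the
package's actual command-line tool)."""
import argparse

import pandas as pd


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Extract frames from a ROS bagfile to .csv")
    parser.add_argument("bagfile", help="path to the .bag file")
    parser.add_argument("--topic", required=True, help="pointcloud topic")
    parser.add_argument("--frames", type=int, nargs="+", default=[0],
                        help="frame numbers to extract")
    parser.add_argument("--out", default="frame_{n}.csv",
                        help="output filename pattern")
    return parser


def export_frame(points: pd.DataFrame, path: str) -> None:
    """Write one frame as CSV for use in CloudCompare etc."""
    points.to_csv(path, index=False)
```

The missing middle step would load the bagfile (e.g. via Dataset.from_file), index out the requested frames, and pass each frame's points to export_frame; .las export would need an extra dependency such as laspy.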
Is your feature request related to a problem? Please describe.
Currently, saving Datasets which contain large frames (more than a few GB per frame) is not possible.
Describe the solution you'd like
The same handling as with Datasets from ROS bag files.
Describe alternatives you've considered
There is no real alternative.
Additional context
Splitting up individual frames with dask should be possible, but needs some restructuring of the whole read and write mechanism.
CI tests are not working at the moment.
It seems that installing the pip dependencies takes forever, and GitHub Actions stops the job after 6 hours.
Possible solutions
See the tg_test_conda_pip branch for development.
Is your feature request related to a problem? Please describe.
Absolute positioning of the pointcloud.
Describe the solution you'd like
Use DGPS + IMU data recorded with ROS.
Describe alternatives you've considered
Using CloudCompare and other tools such as RiSCAN and exporting the result as a las file.
Additional context
geopandas could be the solution.
A good standard and guideline for changelogs and versioning
Sadly it is not possible to use the latest .mcap file format.
Here the whole Traceback:
Traceback (most recent call last):
  File "/pcd_intensities.py", line 64, in <module>
    pcd = pointcloudset.Dataset.from_file(rosfile_path, topic=topic)
  File "/opt/conda/lib/python3.10/site-packages/pointcloudset/dataset.py", line 114, in from_file
    res = DATASET_FROM_FILE[ext](file_path, ext=ext, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/pointcloudset/io/dataset/ros.py", line 117, in dataset_from_ros
    with Reader(bagfile.as_posix()) as reader:
  File "/opt/conda/lib/python3.10/site-packages/rosbags/rosbag2/reader.py", line 105, in __init__
    raise ReaderError(f'Rosbag2 version {ver} not supported; please report issue.')
rosbags.rosbag2.errors.ReaderError: Rosbag2 version 7 not supported; please report issue.
If possible, please add the support.
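Until newer formats are supported, a caller could check the bag's format version up front instead of catching the ReaderError. This sketch parses the version field from rosbag2's metadata.yaml with a minimal line-based reader (the supported-version range shown is illustrative, not taken from the rosbags library):

```python
from pathlib import Path

# Illustrative assumption about which versions the installed reader handles
SUPPORTED_ROSBAG2_VERSIONS = range(1, 7)


def rosbag2_version(bag_dir: Path) -> int:
    """Read the storage format version from a rosbag2 metadata.yaml.

    Uses a minimal line-based parse to avoid a YAML dependency; assumes the
    file contains a single 'version: N' entry, as rosbag2 writes it.
    """
    for line in (bag_dir / "metadata.yaml").read_text().splitlines():
        stripped = line.strip()
        if stripped.startswith("version:"):
            return int(stripped.split(":", 1)[1])
    raise ValueError("no 'version:' entry found in metadata.yaml")
```

With such a check, the tool could fail early with a clear "please convert or downgrade the bag" message instead of a traceback.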
Difference of two pointclouds: calculate the volume differences.
Since the package already has an environment.yaml file in the conda folder, adding a conda recipe and uploading it to conda-forge would be a very nice and rather easy addition.
It is recommended that users with existing code upgrade to pandas 1.5.3 before they upgrade to pandas 2, and make sure their code does not generate FutureWarning or DeprecationWarning messages.
Test if the package runs with pandas 2.0.
The statement of need focuses more on the advances in LIDAR technology than on the problems in related software - the fact that the technology is advancing does not by itself justify the need for more software.
Why is accessing time-series point cloud data important? Why do the current packages not suffice for this task? How does this package circumvent these problems? How does pointcloudset combine pyntcloud and ROS to provide a better software solution?
Also, can you find any scientific publication for which this software would be beneficial or where you could demonstrate its purpose? Also, for which specific problem (i.e autonomous driving, object detection in point clouds) are you developing the software? If it has no specific purpose, how does it function as a general framework for various applications (clustering, general filtering, feature detection -> dense areas, planes, etc.) and can you give a simple example where such a general framework (in connection with the point cloud series aspect) can be used?
It happened that some directories did not contain the needed metadata files. This needs to be checked when writing to a database.
Maybe implement it as a new method for the diff of two pointclouds.
CloudCompare uses it.
Libraries:
https://github.com/ssciwr/py4dgeo
https://www.cloudcompare.org/doc/wiki/index.php/M3C2_(plugin)
https://github.com/lwiniwar/kalman4d & paper: https://esurf.copernicus.org/preprints/esurf-2021-103/
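As a baseline for what a "difference of two pointclouds" method could compute, here is a brute-force cloud-to-cloud nearest-neighbor distance in plain numpy; this is only a reference sketch, and robust change detection needs M3C2-style methods from the libraries listed above:

```python
import numpy as np


def nearest_neighbor_distances(cloud_a: np.ndarray,
                               cloud_b: np.ndarray) -> np.ndarray:
    """For every point in cloud_a, the distance to its nearest point in cloud_b.

    Brute force, O(len(a) * len(b)) memory and time - fine as a reference
    implementation for small clouds, not for production use.
    """
    # pairwise difference vectors via broadcasting: shape (n_a, n_b, 3)
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]
    # squared distances: shape (n_a, n_b)
    sq = np.einsum("ijk,ijk->ij", diff, diff)
    return np.sqrt(sq.min(axis=1))


a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.5]])
nearest_neighbor_distances(a, b)  # array([0.5, 1.1180...])
```

For realistic cloud sizes a KD-tree (e.g. scipy.spatial.cKDTree) replaces the broadcasting step, and volume differences additionally require surface reconstruction or voxelization.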
When using an Ouster lidar with ROS, in some use cases it makes sense to record raw lidar packets instead of the standard PointCloud2 messages, as this saves computational resources and disk space while recording.
To work with the data it is currently necessary to replay the ROS bagfile and record the converted messages, which is cumbersome and error-prone.
Therefore it would be a great feature to load the lidar packets directly into pointcloudset. In principle, this should be possible with the Python Ouster SDK.
For a discussion on the topic and a starting point to write a possible implementation see: ouster-lidar/ouster_example#524
Currently the notebooks are not updated by sphinx.
This is due to 2 reasons:
GitHub Actions and notebook kernels.
The kernel "base" is not available in the environment of the current GitHub Action for the docs.
Plotly plots are not in the HTML documentation
There is only an empty spot. The problem occurs both in the local docker environment and with GitHub Actions.
Why?
To get rid of the problematic ROS dependencies, which cause:
Related to openjournals/joss-reviews#3471.
I'd recommend adding links for some of the things mentioned in the readme. For example:
No default tag latest is defined for the https://hub.docker.com/repository/docker/tgoelles/pointcloudset image, so a plain docker pull tgoelles/pointcloudset will not work; a tag needs to be specified by the user. Is this done intentionally? In that case I think it would be good to write the exact docker pull command in the readme.
The Quickstart example shows a code snippet that refers to rosbag_file.bag and lasfile.las. Are these files available somewhere? Would it be possible to include a quickstart example with existing example files - and ideally - some example output that includes the visualizations as shown in the README?
From the repository contributions, it is not clear what was the contribution of all individual authors. A short paragraph stating who did what (for example A.B. developed the concepts, C.D. developed the software, E.F. wrote the manuscript) would provide clarity inside the manuscript.
In the tag v0.9.0, version v0.8.1 is installed.
Reading ROS2 files
No difference for the user.
The ROS2 support works great! Two things that would be good to change now:
Have you had a look at python-pcl? I personally haven't worked much with PCL - it is quite a beast in C++, so I usually prefer CloudCompare for that - but the python bindings of PCL might be interesting.
Plotting with datashader. Might be useful to get an overview.
Here are some hints on how it works together with plotly: https://plotly.com/python/datashader/
Describe the solution you'd like
A plot method on Datasets, maybe with different views similar to CAD.
Describe alternatives you've considered
Make it simple to generate datashader plots.
Additional context
@Grisly00 did some plots already
dataset.agg({"x" : ["min","max","mean","std"]}) on the testbag data does not work.
The same command works in pandas and is part of the documentation, so it needs to be part of the test suite and work as expected.
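For reference, the equivalent pandas call that the dataset-level method should mirror:

```python
import pandas as pd

# A single frame as pandas would see it; dataset.agg should behave
# consistently with this on a per-column basis
frame = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [0.0, 0.0, 1.0]})

# Dict-of-lists agg returns a DataFrame indexed by the function names
result = frame.agg({"x": ["min", "max", "mean", "std"]})
```

Any test for the dataset-level version can assert against these pandas results directly.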
Ouster and Velodyne
Strongest and last return, as separate pointclouds.
The auto-generated structure is not in line with the actual usage of the package. It needs to be better structured, according to the dataset and pointcloud objects.
Saving a dataset with an empty frame results in errors. An empty frame in a dataset also results in a warning from pyntcloud.
To Reproduce
Steps to reproduce the behavior:
Apply a function which results in an empty frame and try to save it.
Warnings from pyntcloud:
opt/conda/lib/python3.8/site-packages/pyntcloud/core_class.py:670: RuntimeWarning:
Mean of empty slice.
This warning is repeated many times for a single frame.
Expected behavior
The Dataset should handle empty frames and be able to read and write them.
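One possible interim guard, sketched with plain pandas; the helper is hypothetical and simply drops empty frames before writing, whereas the proper fix is to support them end to end:

```python
import pandas as pd


def safe_frames(frames: list[pd.DataFrame]) -> list[pd.DataFrame]:
    """Drop empty frames before writing.

    A stopgap until empty frames are supported in read/write; an
    alternative would be writing an explicit empty-frame placeholder
    so that frame indices stay stable.
    """
    return [frame for frame in frames if not frame.empty]


frames = [pd.DataFrame({"x": [1.0]}), pd.DataFrame(columns=["x"])]
len(safe_frames(frames))  # 1
```

Note that dropping frames shifts frame indices, which is why placeholder writing is likely the better long-term design.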