LiDAR-DenseSeg semantic segmentation left some incorrectly classified points in the resulting point cloud. Attempts were made to remove these data points and mitigate the errors, but a more thorough process is required to better differentiate noise from sparse wall points.
What has been tried so far:
Radial neighbourhood sparsity check: removing points that have fewer than a given threshold of neighbours within a local radial neighbourhood
Voxel sparsity check: checking the neighbouring voxels within the same chunk and removing a point if its voxel is floating or otherwise connected to only 1-2 other points
Truncation error removal: removing points that would have been quantized to a different voxel than n% of their neighbours. This was mildly successful. Sketches of these three checks follow.
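A minimal sketch of the radial neighbourhood check, assuming Open3D's built-in radius outlier filter; the radius, minimum-neighbour count, and file name below are illustrative placeholders, not the values actually used:

```python
import open3d as o3d

def radius_sparsity_filter(pcd: o3d.geometry.PointCloud,
                           radius: float = 0.25,
                           min_neighbours: int = 8) -> o3d.geometry.PointCloud:
    """Remove points with fewer than min_neighbours points inside radius."""
    # Both parameters are placeholder values for illustration only.
    filtered, _kept_indices = pcd.remove_radius_outlier(nb_points=min_neighbours,
                                                        radius=radius)
    return filtered

# Hypothetical usage:
# pcd = o3d.io.read_point_cloud("segmented.ply")  # placeholder file name
# cleaned = radius_sparsity_filter(pcd)
```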
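The voxel sparsity and truncation checks could be sketched as below, assuming a plain NumPy (N, 3) point array and a known voxel size; the occupancy threshold, neighbour count k, and agreement fraction are illustrative guesses rather than the project's actual settings:

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_sparsity_mask(points: np.ndarray, voxel_size: float,
                        min_occupied_neighbours: int = 3) -> np.ndarray:
    """Keep points whose voxel has at least min_occupied_neighbours occupied
    voxels among its 26 neighbours (drops isolated, floating voxels)."""
    voxels = np.floor(points / voxel_size).astype(np.int64)
    occupied = {tuple(v) for v in np.unique(voxels, axis=0)}
    offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
               for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
    keep = np.zeros(len(points), dtype=bool)
    for idx, v in enumerate(voxels):
        n_occ = sum((v[0] + i, v[1] + j, v[2] + k) in occupied
                    for i, j, k in offsets)
        keep[idx] = n_occ >= min_occupied_neighbours  # threshold is a guess
    return keep

def truncation_mask(points: np.ndarray, voxel_size: float,
                    k: int = 10, min_agreement: float = 0.5) -> np.ndarray:
    """Keep points whose voxel matches that of at least min_agreement of their
    k nearest neighbours; points quantized away from their neighbourhood are
    treated as truncation noise."""
    voxels = np.floor(points / voxel_size).astype(np.int64)
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k + 1)  # first neighbour is the point itself
    keep = np.zeros(len(points), dtype=bool)
    for idx, nbrs in enumerate(nbr_idx):
        same_frac = np.all(voxels[nbrs[1:]] == voxels[idx], axis=1).mean()
        keep[idx] = same_frac >= min_agreement  # min_agreement stands in for n%
    return keep
```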
What is likely the solution (to be attempted after Forest Friends is complete) is the following:
Initial point cloud simplification via Poisson-disk subsampling with an explicit radius of around 1.0 m (exactly as in MeshLab's implementation)
Conversion of the point cloud into a mesh (likely via the ball pivoting algorithm, though other options could be explored)
Rejection of edges longer than a certain length could be useful
RANSAC-based plane fitting, completing what LiDAR-DenseSeg proposed
Open3D can provide some of these functions; there are currently library installation issues, but resolving them is less pressing than completing Forest Friends. Rough, untested sketches of each planned step follow.
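For the Poisson-disk subsampling step, a crude greedy approximation (dart throwing over a KD-tree) is sketched below; MeshLab's actual filter is more sophisticated, and only the 1.0 m radius comes from the note above:

```python
import numpy as np
from scipy.spatial import cKDTree

def poisson_disk_subsample(points: np.ndarray, radius: float = 1.0,
                           seed: int = 0) -> np.ndarray:
    """Greedy dart throwing: keep a point only if no already-kept point lies
    within radius, visiting points in random order. A rough stand-in for
    MeshLab-style Poisson-disk subsampling, not its exact algorithm."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))
    tree = cKDTree(points)
    covered = np.zeros(len(points), dtype=bool)
    kept = []
    for idx in order:
        if covered[idx]:
            continue
        kept.append(idx)
        # Mark everything within the exclusion radius so it cannot be kept later.
        covered[tree.query_ball_point(points[idx], radius)] = True
    return points[np.array(kept)]
```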
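The meshing and long-edge rejection steps might look roughly like this with Open3D's ball pivoting reconstruction, assuming a recent Open3D version; the ball radii and the 0.5 m edge limit are illustrative guesses:

```python
import numpy as np
import open3d as o3d

def mesh_with_edge_rejection(pcd: o3d.geometry.PointCloud,
                             ball_radii=(0.1, 0.2, 0.4),
                             max_edge_length: float = 0.5) -> o3d.geometry.TriangleMesh:
    """Ball-pivoting reconstruction followed by removal of triangles that
    contain an edge longer than max_edge_length."""
    pcd.estimate_normals()  # ball pivoting requires normals; may still need consistent orientation
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        pcd, o3d.utility.DoubleVector(list(ball_radii)))

    # Compute the three edge lengths of every triangle and drop triangles
    # whose longest edge exceeds the limit (a placeholder value).
    verts = np.asarray(mesh.vertices)
    tris = np.asarray(mesh.triangles)
    edges = np.stack([
        np.linalg.norm(verts[tris[:, 0]] - verts[tris[:, 1]], axis=1),
        np.linalg.norm(verts[tris[:, 1]] - verts[tris[:, 2]], axis=1),
        np.linalg.norm(verts[tris[:, 2]] - verts[tris[:, 0]], axis=1),
    ], axis=1)
    mesh.remove_triangles_by_mask((edges.max(axis=1) > max_edge_length).tolist())
    mesh.remove_unreferenced_vertices()
    return mesh
```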
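Finally, the RANSAC plane fitting step could build on Open3D's segment_plane, peeling off one dominant plane at a time; the distance threshold, inlier minimum, and iteration counts below are placeholders:

```python
import open3d as o3d

def extract_planes(pcd: o3d.geometry.PointCloud,
                   distance_threshold: float = 0.05,
                   min_inliers: int = 500,
                   max_planes: int = 10):
    """Repeatedly fit the dominant plane via RANSAC and remove its inliers,
    returning the fitted planes and the leftover (unexplained) points."""
    remaining = pcd
    planes = []
    for _ in range(max_planes):
        if len(remaining.points) < min_inliers:
            break
        model, inliers = remaining.segment_plane(
            distance_threshold=distance_threshold, ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break  # remaining planes are too small to trust
        planes.append((model, remaining.select_by_index(inliers)))
        remaining = remaining.select_by_index(inliers, invert=True)
    return planes, remaining
```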