
researchtools's Introduction

Research_tools

This project collects blogs, ideas, reviews, and news related to positioning and navigation via sensor fusion for autonomous systems, e.g. autonomous driving and unmanned aerial vehicles. Interesting papers are posted in the Issues and closed once they have been read.

Content

Paper Reviewing and Code Implementation

Paper writing

Sensor Fusion

Coding

Industrial

Challenging Dataset

Laboratories in Navigation

NAV Lab at Stanford
Senseable City Lab at MIT
ASPIN at University of California, Irvine
Intelligent Positioning and Navigation Lab at PolyU
IRIS at ETHZ
HKUST Aerial Robotics Group
RAM Lab at HKUST
Robot Perception Lab
Robust Field Autonomy Lab
RPNG Lab at University of Delaware
State Key Lab of CAD&CG at Zhejiang University
Toronto AI Lab
StachnissLab Photogrammetry & Robotics
WINS Lab (Yuan Shen) at Tsinghua University

Contact


researchtools's Issues

Displacement detection based on Bayesian inference from GNSS kinematic positioning for deformation monitoring

https://reader.elsevier.com/reader/sd/pii/S0888327021009067?token=7FDC5A5B6009807D8A5EADFE4892565DDB84B6D5FB49DA078AF0276E726CB6C3ADE4ED86CDE1A11A66C401647E7EEF0A&originRegion=us-east-1&originCreation=20211124010344

Displacement is an important parameter in engineering analysis in structural mechanics and geomechanics. For decades, displacement detection based on the Global Navigation Satellite System (GNSS) has become increasingly important for a wide range of applications, from landslide monitoring and subsidence surveying to industrial measurement. However, due to the influence of measurement noise, it is still a challenge to identify and extract displacement from GNSS kinematic positioning results. To resolve this, we propose a novel displacement detection approach with the purpose of identifying and extracting displacement from GNSS kinematic positioning. Specifically, we use Bayesian inference to obtain the displacement change time from the coordinate time series of GNSS kinematic positioning. By investigating the posterior distribution of the designed change-point parameter, we can identify the change points. Furthermore, we derive the mean value from the posterior distribution of the mean parameter, and further obtain the displacement. Results from simulation and field experiments have demonstrated the effectiveness and flexibility of the proposed method. Significant displacement can be clearly identified; small displacement can be identified by adding an interval-constraint prior. The accuracy of vertical displacement extraction from GNSS real-time kinematic positioning can reach within 2 mm in 15 min.
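The core idea above can be sketched in a few lines: place a flat prior on a single change-point index, score each candidate split by its Gaussian likelihood, and read the displacement off the segment means. This is a simplified sketch, not the paper's method — it assumes one change point, known noise std, and maximum-likelihood plug-in means instead of the full posterior.

```python
import numpy as np

def change_point_posterior(y, sigma):
    """Posterior over a single change-point index tau in a 1D series,
    assuming Gaussian noise with known std `sigma`, a flat prior on tau,
    and ML plug-in means on each segment (a common simplification)."""
    n = len(y)
    log_post = np.full(n, -np.inf)
    for tau in range(1, n - 1):
        m1, m2 = y[:tau].mean(), y[tau:].mean()
        rss = ((y[:tau] - m1) ** 2).sum() + ((y[tau:] - m2) ** 2).sum()
        log_post[tau] = -rss / (2.0 * sigma ** 2)
    log_post -= log_post.max()            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Synthetic GNSS vertical coordinate series (metres): a 5 mm step at
# sample 200 buried in 2 mm white noise.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.000, 0.002, 200),
                    rng.normal(0.005, 0.002, 200)])
post = change_point_posterior(y, sigma=0.002)
tau_hat = int(np.argmax(post))
disp_mm = (y[tau_hat:].mean() - y[:tau_hat].mean()) * 1000.0
```

With the step well above the noise floor, the posterior peaks sharply near the true change epoch and the recovered displacement is close to 5 mm.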

Semantic Landmark-based HD Map Localization Using Sliding Window Max-Mixture Factor Graphs

https://ieeexplore.ieee.org/abstract/document/9565092/

Among many components in automated driving, localization is one fundamental task that provides the context for scene understanding and motion planning. This contribution focuses on localization in high-definition (HD) maps, which provide detailed information of the driving environment. An important problem in localization is the data association (DA) between measurements and landmarks in the HD map. While other approaches mainly use geometric measurement information as well as the most likely DA hypothesis, this contribution proposes a localization algorithm capable of handling DA ambiguities using semantic information in a sliding window factor graph. By incorporating a max-mixture scheme, the algorithm is able to recover from potentially false estimations. Furthermore, a realistic simulation employing the CARLA simulator is used to generate controlled scenarios and evaluate the performance of the proposed algorithm. The experiments suggest that the proposed approach is able to achieve accurate and robust pose estimations in the presence of measurement uncertainties.
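The max-mixture idea in this abstract — keep several data-association hypotheses in one factor and select the most likely component at each evaluation — can be sketched as follows. The 2D setup, names, and weights are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def max_mixture_select(z, hypotheses, weights, sigma):
    """Max-mixture data association: evaluate each landmark hypothesis
    as a weighted Gaussian component and keep only the most likely one
    at the current linearization point."""
    best_idx, best_nll = -1, np.inf
    for i, (lm, w) in enumerate(zip(hypotheses, weights)):
        r = np.asarray(z) - np.asarray(lm)                 # residual
        nll = 0.5 * float(r @ r) / sigma ** 2 - np.log(w)  # neg. log-lik.
        if nll < best_nll:
            best_idx, best_nll = i, nll
    return best_idx, best_nll

# A measurement near landmark A is associated with A, not B.
idx, _ = max_mixture_select(z=[2.1, 0.9],
                            hypotheses=[[2.0, 1.0], [5.0, 5.0]],
                            weights=[0.5, 0.5], sigma=0.3)
```

Because the selection is re-evaluated at every linearization, the optimizer can switch hypotheses later, which is what lets the method recover from initially false associations.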

Day20200719 to be updated

Sensors in Autonomous Vehicles: A Survey

https://watermark.silverchair.com/javs-21-1008.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAABCkwggQlBgkqhkiG9w0BBwagggQWMIIEEgIBADCCBAsGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMa8KTqCEF1LhxZFMuAgEQgIID3IZAe4wqh5BD1bAIw66l6WPx4JeiPMY6nT8B_U9SETw-dQVfNeSRHY-VqZA4w6cYF7CNjDDEa4vCNYwuDgbq4yoAFUXj3ZGXWV5qZzk5r2K7aeCPyIPKcw80h2SB4G9NeFia02JwLKbqvRQipNyOPHWnnKbnCXrlnF1JATxfZRD1k2mhzUY5q91KfDyKgyraYG81DOcXtePFfnR5aGlrmZwCdy-_sI96QU21h-6xVw2atXCNW_UIDnmuIETwWJoyGIlXXCTk1Ezn20kFTPDGxaCwqYMp6JM3GkxF9hhQ1xwCJsG2wzLK6wfIX2Gyqj4h-n3_nQveGOHnEQ24jVvzTMsfKraIgHtHDnCqXEPlBpGaMuKzM814vnEJF3dZOcYCt1dnd4UJ1MBzUChB2x4ZkHN8fWqRcXv31fkdOyB0QegygXM38Mgq1iAOdGvMnRsi18DFwvNfhWTfnQk3Jzrld9wGj5wjpzoTNa4hVD7ac4WjqkiJi5tfu_EU4creCOWUWuOaTyRNds_uc-Z40p0Th5BUIBsVMa2GhzZxN5Bt5-BOZT50DYEsQml3i-n7LgqwCA0h1Ok6VoKYL7BuVWCigfu0Z18OY-wmXjbA0L_Mt3GL1m8oealrKUZ_QQN__Tuj3nmf9tZd2zni43zqpnYrrXxiDKgkB8rNgG_REOm747f3BCG27Zx_PQ3F7876dMpA81i6PkEt-UvyYP287u8eOShcYa5iUpt-8ktDtnOsMvchG2oaceVQXbiET6rhKrPTUOouuqUSkcAATqLiQ_hMZk3kcyM9ausxxpHuDoXaZLEWCVBUYhhDECJt9aYNbtrf8f1hu3ji0TizHbF4rtrsVaMenr_auqLDRxjI9AjY4B6IqXaWcOqvAcshxywHmacu45oeXsvK4Tc_wOug9le3RoTYylIJAjPMhihvsNCfX8UjPEUSef45QxZ5mckc2hp-DTyupOuFyRSNd_xHQe1ZvPXwhvB9UOjcTv0iTdUlquqIhkeNckazQ7_0XN811FNXnv1dDrHa2RFWrdfbwIsrzLb2pSBkZbdRrkZKqp-RtRKE456eJxQv84TU2in8_7HPLoxUqe-zKoddIGTaRmD1ccncFOz_w24W2QIrS1vydX8aL4oC2-TeRI7Ljt-wJmXbmClgQ5E5ztNXNaDV55DnEkGyLtR7QaRUilV1-5evtIcvWOqoHJqRIQs0t-SwgYcJe4tr3oqFBp7ERC_9miGnAyDuaqq1i1Z0UpoBtaLxWjD57xYFZ0EH2RooD1RBeEkPk817QScVu9K1bgilSi_F6MSG0avdqUV2CQ3se70

Online Spatial and Temporal Initialization for a Monocular Visual-Inertial-LiDAR System

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9641839

Multisensor fusion of visual-inertial-LiDAR systems (VILSs) has become a research hotspot in recent years. Most VILSs require offline calibration of external parameters and hardware time synchronization to ensure successful initialization of the system. However, such requirements are not easy to achieve in practice. We propose an online initialization method for VILSs that can achieve software time synchronization (temporal) and automatically calculate external parameters (spatial) without prior knowledge of environmental information or special movements. Our approach consists of two phases: (1) novel monocular visual inertial (VI) initialization and (2) LiDAR-added (LI) online initialization. In phase 1, our method considers more complete initial states, including the time offsets, biases of the accelerometer, and translation between the inertial measurement unit (IMU) and camera. We implement faster and more robust VI initialization than VINS-MONO. In phase 2, we linearly interpolate the visual-inertial odometry (VIO) onto the same timestamps of the LiDAR and solve the LI external parameters by aligning the VIO with LiDAR odometry (LO). Our LI initialization phase is more practical due to the current lack of an online LI initialization method. Additionally, to eliminate the bundling effect of the initial state estimation in the two phases, we provide a linear solution in nominal-state space to obtain a rough starting value of the rotation extrinsic parameter first. Then, nonlinear optimization under motion constraints is introduced to obtain the full initial states in true-state space. The performance of the proposed method is verified on both a public dataset and a self-assembled handheld VILS device in real-world experiments.
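The temporal-alignment step described above — interpolating the VIO trajectory onto the LiDAR timestamps before solving the LI extrinsics — can be sketched minimally. This is a translation-only toy under the assumption of linear motion between poses; a full implementation would also slerp the rotations.

```python
import numpy as np

def interpolate_vio_to_lidar(t_vio, p_vio, t_lidar):
    """Linearly interpolate VIO positions onto LiDAR timestamps.
    t_vio: (N,) VIO timestamps; p_vio: (N,3) positions; t_lidar: (M,)."""
    p = np.asarray(p_vio, dtype=float)
    cols = [np.interp(t_lidar, t_vio, p[:, k]) for k in range(p.shape[1])]
    return np.stack(cols, axis=1)

# VIO poses at t=0 and t=1 s; LiDAR scan stamped at t=0.5 s.
p_li = interpolate_vio_to_lidar([0.0, 1.0],
                                [[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]],
                                [0.5])
```

Once both odometries live on the same timestamps, aligning VIO against LiDAR odometry reduces to a standard trajectory-to-trajectory extrinsic fit.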

A cliquey subgraph approach to sparsified UAV visual inertial odometry by surjective Bayes-tree-to-factor-graph mapping

https://www.sciencedirect.com/science/article/pii/S1270963821007604

A new method is proposed to improve the trajectory estimation accuracy of visual inertial odometry (VIO) subject to information sparsification. Current practices assume that the result of sparsification, i.e. a subgraph of a factor graph, takes the form of a tree and impose mutual independence upon its nodes. However, this oversimplification may undermine a close approximation to the complete information matrix and eventually result in excessive estimation errors. Therefore, we propose to use a cliquey subgraph with preserved mutual correlation between selected pairs of nodes. Sparsification is further accelerated by the new discovery that connectivity between arbitrary nodes can be read directly from the Bayes tree, by transforming its surjective mapping to the underlying factor graph into a bijective mapping through a downstream-upstream traversing operation. Results from the public dataset EuRoC indicate that the proposed method, while maintaining a low computational profile, achieves lower absolute trajectory error (ATE) than existing methods on the easy- and hard-level sequences and performs equally well on the medium-level sequences. This renders the proposed method a potent candidate for power saving in visual-navigation-purposed application-specific integrated circuit (ASIC) chip designs.

Semi-analytical assessment of the relative accuracy of the GNSS/INS in railway track irregularity measurements


An aided Inertial Navigation System (INS) is increasingly exploited in precise engineering surveying, such as railway track irregularity measurement, where a high relative measurement accuracy rather than absolute accuracy is emphasized. However, how to evaluate the relative measurement accuracy of the aided INS has rarely been studied. We address this problem with a semi-analytical method to analyze the relative measurement error propagation of the Global Navigation Satellite System (GNSS) and INS integrated system, specifically for the railway track irregularity measurement application. The GNSS/INS integration in this application is simplified as a linear time-invariant stochastic system driven only by white Gaussian noise, and an analytical solution for the navigation errors in the Laplace domain is obtained by analyzing the resulting steady-state Kalman filter. Then, a time series of the error is obtained through a subsequent Monte Carlo simulation based on the derived error propagation model. The proposed analysis method is then validated through data simulation and field tests. The results indicate that a 1 mm accuracy in measuring the track irregularity is achievable for the GNSS/INS integrated system. Meanwhile, the influences of the dominant inertial sensor errors on the final measurement accuracy are analyzed quantitatively and discussed comprehensively.
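The Monte Carlo step in this pipeline — drawing error time series from a linear time-invariant model driven by white Gaussian noise — can be illustrated with a first-order Gauss-Markov process. This is a toy stand-in for the paper's steady-state Kalman filter error model; the parameters q and beta are illustrative, not values from the paper.

```python
import numpy as np

def simulate_relative_error(n, dt, q, beta, seed=1):
    """Monte Carlo draw of a first-order Gauss-Markov navigation error:
    e_k = (1 - beta*dt) * e_{k-1} + sqrt(q*dt) * w_k,  w_k ~ N(0, 1).
    Steady-state std is approximately sqrt(q / (2*beta))."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    for k in range(1, n):
        e[k] = (1.0 - beta * dt) * e[k - 1] + np.sqrt(q * dt) * rng.normal()
    return e

# 100 s at 100 Hz with illustrative driving noise and correlation rate.
err = simulate_relative_error(n=10000, dt=0.01, q=1e-6, beta=0.5)
```

Statistics of many such draws (e.g. the std over a chord length) then characterize the relative, rather than absolute, measurement error.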

Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM

https://ieeexplore.ieee.org/abstract/document/9632274

We investigate a novel way to integrate visual SLAM and lidar SLAM. Instead of enhancing visual odometry via lidar depths or using visual odometry as the initial motion guess for lidar odometry, we propose tightly-coupled visual-lidar SLAM (TVL-SLAM), in which the visual and lidar frontends run independently and all of the visual and lidar measurements are incorporated in the backend optimizations. To achieve large-scale bundle adjustments in TVL-SLAM, we focus on accurate and efficient lidar residual compression. The visual-lidar SLAM system implemented in this work is based on the open-source ORB-SLAM2 and a lidar SLAM method with average performance, whereas the resulting visual-lidar SLAM clearly outperforms existing visual/lidar SLAM approaches, achieving 0.52% error on KITTI training sequences and 0.56% error on testing sequences.

Direct LiDAR Odometry: Fast Localization with Dense Point Clouds

https://arxiv.org/pdf/2110.00605.pdf

Field robotics in perceptually-challenging environments requires fast and accurate state estimation, but modern LiDAR sensors quickly overwhelm current odometry algorithms. To this end, this paper presents a lightweight frontend LiDAR odometry solution with consistent and accurate localization for computationally-limited robotic platforms. Our Direct LiDAR Odometry (DLO) method includes several key algorithmic innovations which prioritize computational efficiency and enable the use of dense, minimally-preprocessed point clouds to provide accurate pose estimates in real-time. This is achieved through a novel keyframing system which efficiently manages historical map information, in addition to a custom iterative closest point solver for fast point cloud registration with data structure recycling. Our method is more accurate with lower computational overhead than the current state-of-the-art and has been extensively evaluated in several perceptually-challenging environments on aerial and legged robots as part of NASA JPL Team CoSTAR's research and development efforts for the DARPA Subterranean Challenge.
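The keyframing idea — only grow the map when the robot has moved far enough from every stored keyframe — can be sketched with a simple spacing test. This is a deliberately simplified version; DLO's actual keyframing is adaptive, not a fixed threshold.

```python
import numpy as np

def should_add_keyframe(pose, keyframes, dist_thresh=1.0):
    """Spacing-based keyframe test: add a keyframe when the current
    position is farther than dist_thresh (metres) from every stored
    keyframe position. pose: (3,), keyframes: list of (3,)."""
    if not keyframes:
        return True  # always keep the first frame
    d = np.linalg.norm(np.asarray(keyframes) - np.asarray(pose), axis=1)
    return bool(d.min() > dist_thresh)

kfs = [[0.0, 0.0, 0.0]]
near = should_add_keyframe([0.5, 0.0, 0.0], kfs)  # within threshold
far = should_add_keyframe([2.0, 0.0, 0.0], kfs)   # beyond threshold
```

Registering scans only against the keyframe submap, rather than every past scan, is what keeps the map query cost bounded as the trajectory grows.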

FasterGICP: Acceptance-Rejection Sampling Based 3D Lidar Odometry

https://ieeexplore.ieee.org/abstract/document/9599551

Distribution-to-distribution-based lidar odometry is known for its good accuracy, but it cannot run in real-time when the number of points is large. To alleviate this problem, Faster Generalized Iterative Closest Point (FasterGICP) is proposed in this letter, in which an acceptance-rejection sampling-based two-step point filter excludes the points that rarely benefit lidar odometry performance. Specifically, the lidar point cloud is first filtered so that only points with high planarity tend to be preserved, which reduces the distribution approximation errors when GICP works as a plane-to-plane Iterative Closest Point (ICP). Second, during the pose estimation optimization process, the lidar points are further iteratively filtered according to their contributions to the optimization objective function, where a point's matching error defines its contribution. The two-step filtering process is achieved by designing the target and proposal distributions in the acceptance-rejection sampling framework. With the help of the point filter, our odometry can work in a scan-to-model strategy while demonstrating both efficiency and accuracy improvements. Extensive validation experiments are conducted on public datasets and our own. The results demonstrate that our method achieves competitive performance compared with state-of-the-art lidar odometry and Simultaneous Localization and Mapping (SLAM) methods. Our code has been made publicly available at https://github.com/SLAMWang/fasterGICP .
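The acceptance-rejection mechanics behind the filter can be sketched in one function: with a uniform proposal, each point survives with probability proportional to its score. This is a simplified stand-in for the paper's two-step target/proposal design; here `scores` could represent planarity or matching-error-based contributions, and `scale` is assumed to upper-bound the scores.

```python
import numpy as np

def acceptance_rejection_filter(scores, rng, scale=1.0):
    """Keep each point with probability scores[i] / scale (uniform
    proposal): draw u ~ U[0,1) per point and accept where u is below
    the normalized score. Requires scale >= max(scores)."""
    u = rng.uniform(0.0, 1.0, size=len(scores))
    return u < np.asarray(scores, dtype=float) / scale

rng = np.random.default_rng(0)
# A maximally useful point is always kept; a useless one never is.
mask = acceptance_rejection_filter([1.0, 0.0, 1.0], rng)
```

Downsampling this way is score-aware: unlike uniform voxel downsampling, it concentrates the surviving points where they actually constrain the registration.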

RailLoMer: Rail Vehicle Localization and Mapping with LiDAR-IMU-Odometer-GNSS Data Fusion

https://arxiv.org/ftp/arxiv/papers/2111/2111.15043.pdf

We present RailLoMer to achieve real-time, accurate, and robust odometry and mapping for rail vehicles. RailLoMer receives measurements from two LiDARs, an IMU, a train odometer, and a global navigation satellite system (GNSS) receiver. As the frontend, the estimated motion from IMU/odometer preintegration de-skews the denoised point clouds and produces an initial guess for frame-to-frame LiDAR odometry. As the backend, a sliding-window-based factor graph is formulated to jointly optimize the multi-modal information. In addition, we leverage plane constraints from extracted rail tracks and a structure appearance descriptor to further improve the system's robustness against repetitive structures. To ensure a globally consistent and less blurry mapping result, we develop a two-stage mapping method that first performs scan-to-map matching at local scale, then utilizes the GNSS information to register the submaps. The proposed method is extensively evaluated on datasets gathered over a long time range and numerous scales and scenarios; the results show that RailLoMer delivers decimeter-grade localization accuracy even in large or degenerated environments. We also integrate RailLoMer into an interactive train state and railway monitoring system prototype, which has already been deployed on an experimental freight railroad.

What if there was no revisit? Large-scale graph-based SLAM with traffic sign detection in an HD map using LiDAR inertial odometry

https://link.springer.com/article/10.1007/s11370-021-00395-2

Accurate localization and mapping in a large-scale environment is an essential capability of an autonomous vehicle. The difficulty for previous LiDAR or LiDAR-inertial simultaneous localization and mapping (SLAM) methods is correcting long-term drift error in a large-scale environment. This paper proposes a novel approach to large-scale, graph-based SLAM with traffic sign data contained in a high-definition (HD) map. The graph of the system is structured with an inertial measurement unit (IMU) factor, a LiDAR-inertial odometry factor, a map-matching factor, and a loop closure factor. A local sliding-window-based optimization method is employed for real-time processing. As a result, the proposed method improves the accuracy of localization and mapping compared with state-of-the-art LiDAR or LiDAR-inertial SLAM methods. In addition, unlike previous studies, the proposed method can localize accurately without revisits, which conventional graph-based SLAM requires for graph optimization. The proposed method is intensively validated with a dataset collected in a city where the Global Navigation Satellite System (GNSS) signal is unreliable and on a university campus.

AdaFusion: Visual-LiDAR Fusion with Adaptive Weights for Place Recognition

https://arxiv.org/pdf/2111.11739.pdf

Recent years have witnessed the increasing application of place recognition in various environments, such as city roads, large buildings, and a mix of indoor and outdoor places. This task, however, still remains challenging due to the limitations of different sensors and the changing appearance of environments. Current works only consider the use of individual sensors, or simply combine different sensors, ignoring the fact that the importance of different sensors varies as the environment changes. In this paper, an adaptive weighting visual-LiDAR fusion method, named AdaFusion, is proposed to learn the weights for both images and point cloud features. Features of these two modalities thus contribute differently according to the current environmental situation. The learning of weights is achieved by the attention branch of the network, which is then fused with the multi-modality feature extraction branch. Furthermore, to better utilize the potential relationship between images and point clouds, we design a two-stage fusion approach to combine the 2D and 3D attention. Our work is tested on two public datasets, and experiments show that the adaptive weights help improve recognition accuracy and system robustness to varying environments.
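The adaptive-weight fusion itself reduces to a small operation: softmax the per-modality attention scores and scale each descriptor before combining them. This is a toy sketch of the idea; in the paper the logits come from a learned attention branch, and the fusion operator here (weighted concatenation) is an assumption for illustration.

```python
import numpy as np

def adaptive_fuse(img_feat, pc_feat, attn_logits):
    """Weight image and point-cloud descriptors by softmax attention
    scores, then concatenate into one place-recognition descriptor."""
    x = np.asarray(attn_logits, dtype=float)
    w = np.exp(x - x.max())        # stable softmax over two modalities
    w /= w.sum()
    return np.concatenate([w[0] * np.asarray(img_feat, dtype=float),
                           w[1] * np.asarray(pc_feat, dtype=float)])

# Equal logits -> each modality contributes with weight 0.5.
fused = adaptive_fuse([2.0, 2.0], [4.0, 4.0], attn_logits=[0.0, 0.0])
```

In a poorly lit scene the attention branch would push the logits toward the LiDAR modality, down-weighting the unreliable image descriptor instead of discarding it.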
