
dstl / stone-soup

387 stars, 128 forks, 29.41 MB

A software project to provide the target tracking community with a framework for the development and testing of tracking algorithms.

Home Page: https://stonesoup.rtfd.io

License: MIT License

Python 99.95% Shell 0.05%

stone-soup's People

Contributors

a-acuto, benjaminfraser, campbell101, carlson-j, cje20a, csherman-dstl, davekirkland, edwheelhouse-dstl, ekhunter123, erogers-dstl, gawebb-dstl, hpritchett-dstl, idorrington-dstl, jjosborne-dstl, jmbarr, jswright-dstl, lflaherty-dstl, mharris-dstl, nperree-dstl, oharrald-dstl, orosoman-dstl, pacarniglia, rcgorman-dstl, richardvec, rjgreen-dstl, sdhiscocks, sglvladi, snaylor20, spike-dstl, timothy-glover


stone-soup's Issues

Pytest for stonesoup/simulator/tests/test_detections.py fails sometimes

It has come to my attention that stonesoup/simulator/tests/test_detections.py seems to fail from time to time when running pytest.

When running locally this is not a major issue, as re-running pytest will most likely succeed. However, it is particularly problematic when it occurs during CI, as build triggers require permissions, meaning that a new commit/force-push must be performed. This can cause confusion for new users.

Below is a copy of the observed CI debug message:

assert 377 < 374  +  where 377 = len({Clutter(state_vector=StateVector([[-775.78958849],\n             [3370.39113146]]), timestamp=datetime.datetime(2020, ...39.95067859]]), timestamp=datetime.datetime(2020, 1, 28, 23, 16, 51, 77688), measurement_model=None, metadata={}), ...})  +  and   374 = len({Clutter(state_vector=StateVector([[1253.82850346],\n             [1226.54466325]]), timestamp=datetime.datetime(2020, ...26.2516991 ]]), timestamp=datetime.datetime(2020, 1, 28, 23, 17, 31, 77688), measurement_model=None, metadata={}), ...})
transition_model1 = <stonesoup.simulator.tests.conftest.transition_model1.<locals>.TestTransitionModel object at 0x7f5b423966d0>
transition_model2 = <stonesoup.simulator.tests.conftest.transition_model2.<locals>.TestTransitionModel object at 0x7f5b3fc13950>
measurement_model = <stonesoup.simulator.tests.conftest.measurement_model.<locals>.TestMeasurementModel object at 0x7f5b3fc13dd0>
timestep = datetime.timedelta(seconds=10)

    def test_switch_detection_simulator(
            transition_model1, transition_model2, measurement_model, timestep):
        initial_state = State(
            np.array([[0], [0], [0], [0]]), timestamp=datetime.datetime.now())
        model_probs = [[0.5, 0.5], [0.5, 0.5]]
        groundtruth = SwitchOneTargetGroundTruthSimulator(
            transition_models=[transition_model1, transition_model2],
            model_probs=model_probs,
            initial_state=initial_state,
            timestep=timestep)
        meas_range = np.array([[-1, 1], [-1, 1]]) * 5000
    
        detector = SwitchDetectionSimulator(
            groundtruth, measurement_model, meas_range, clutter_rate=3,
            detection_probabilities=[0, 1])
    
        test_detector = SimpleDetectionSimulator(
            groundtruth, measurement_model, meas_range, clutter_rate=3,
            detection_probability=1
        )
    
        total_detections = set()
        clutter_detections = set()
        for step, (time, detections) in enumerate(detector):
            total_detections |= detections
            clutter_detections |= detector.clutter_detections
    
            # Check time increments correctly
            assert time == initial_state.timestamp + step * timestep
    
        test_detections = set()
        for step, (time, detections) in enumerate(test_detector):
            test_detections |= detections
    
        # Check both real and clutter detections are generated
        assert len(total_detections) > len(clutter_detections)
    
        # Check clutter is generated within specified bounds
        for clutter in clutter_detections:
            assert (meas_range[:, 0] <= clutter.state_vector.ravel()).all()
            assert (meas_range[:, 1] >= clutter.state_vector.ravel()).all()
    
        assert detector.clutter_spatial_density == 3e-8
    
>       assert len(total_detections) < len(test_detections)
E       assert 377 < 374
E        +  where 377 = len({Clutter(state_vector=StateVector([[-775.78958849],\n             [3370.39113146]]), timestamp=datetime.datetime(2020, ...39.95067859]]), timestamp=datetime.datetime(2020, 1, 28, 23, 16, 51, 77688), measurement_model=None, metadata={}), ...})
E        +  and   374 = len({Clutter(state_vector=StateVector([[1253.82850346],\n             [1226.54466325]]), timestamp=datetime.datetime(2020, ...26.2516991 ]]), timestamp=datetime.datetime(2020, 1, 28, 23, 17, 31, 77688), measurement_model=None, metadata={}), ...})

stonesoup/simulator/tests/test_detections.py:93: AssertionError

Testing of EKF won't work

Current versions of the EKF predictor and updater are tested using linear transition and observation models. As these (linear) models don't possess a .jacobian() function, any attempt to test them will fail. The tests need to be re-written with non-linear models.
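
For illustration, a minimal sketch of the kind of non-linear model such tests could use (class and method names are illustrative, not the existing Stone Soup API): a 2D Cartesian-to-range/bearing measurement model with an analytic .jacobian():

import numpy as np

class RangeBearingModel:
    """Illustrative non-linear measurement model mapping a Cartesian
    state [x, y] to a [range, bearing] measurement."""

    def function(self, state_vector):
        x, y = np.ravel(state_vector)[:2]
        return np.array([[np.hypot(x, y)],
                         [np.arctan2(y, x)]])

    def jacobian(self, state_vector):
        x, y = np.ravel(state_vector)[:2]
        r2 = x**2 + y**2
        r = np.sqrt(r2)
        # Analytic Jacobian of [range, bearing] w.r.t. [x, y]
        return np.array([[x / r, y / r],
                         [-y / r2, x / r2]])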

Create PDA/JPDA tracker

  • Create multi-measurement hypothesis
  • Create joint/multi-measurement data associator
  • Create PDA Filter

Redesign base particle filter class.

Decision needs to be made regarding the particle filter construction

Currently: Particle types contain a state_vector and a weight (and a parent). These are wrapped in a ParticleState, which is just a list. Particle predictors and updaters work by applying the (transition, measurement) functions to these state_vectors. This won’t work with transition/measurement functions that take State, rather than StateVector, types.

This might be changed in either of two ways. Probably the easiest is to adjust Particle so as to take a State rather than a StateVector. The problem with that is going to be a large amount of redundant logic: if the State contains material external to the state_vector (metadata, transformation functions, etc.), that will be replicated needlessly for each particle in the ParticleState.

Better would be to construct a ParticleState which is more than a list. Instead it’s an analogue of a base state which it “particlizes”, preserving the metadata and functions and merely replicating the state vector N times. It would carry functions which calculate weighted mean and covariance of particles.
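
A rough sketch of this second option, assuming nothing about the existing class internals (all names illustrative): a ParticleState that stores the N particles as columns of a single array plus a weight vector, and computes the weighted statistics itself:

import numpy as np

class ParticleState:
    """Sketch: N particles held as columns of one (dim, N) array, with a
    single shared timestamp/metadata rather than N copies."""

    def __init__(self, state_vectors, weights, timestamp=None, metadata=None):
        self.state_vectors = np.asarray(state_vectors)  # shape (dim, N)
        self.weights = np.asarray(weights)              # shape (N,), sums to 1
        self.timestamp = timestamp
        self.metadata = metadata or {}

    @property
    def mean(self):
        # Weighted mean over particles, shape (dim, 1)
        return self.state_vectors @ self.weights[:, np.newaxis]

    @property
    def covar(self):
        # Weighted sample covariance of the particles
        diff = self.state_vectors - self.mean
        return (diff * self.weights) @ diff.T

Predictors and updaters could then operate on the whole array at once, and any metadata or transformation functions are stored once rather than per particle.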

Would affect/address #130 and #26.

Create meta-data aware data associator

The data associator should avoid associating detections to a track when they have a meta-data field that doesn't match (e.g. colour).

ALTERNATIVELY, non-matching metadata can affect probability of association - metadata could be fraudulently or mistakenly misreported or could legitimately change over time (e.g. ship-specific identifier - MMSI), so non-matching metadata is not likely to be associated if there is a better choice, but can be associated if there are no better options.

ALSO, it might be nice to have "fuzzy" metadata matching - for example, if we are filtering on radar type as determined by broadcast frequency, we might want to filter on the frequency plus/minus expected variability.
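
One possible shape for the probabilistic variant, purely as an illustrative sketch (none of these names exist in Stone Soup): a soft match factor that down-weights, rather than forbids, hypotheses with conflicting metadata, with a tolerance hook for "fuzzy" numeric fields:

def metadata_match_factor(track_meta, detection_meta, mismatch_penalty=0.01,
                          tolerances=None):
    """Sketch: return a multiplier in (0, 1] for a hypothesis probability.
    Mismatches are penalised, not forbidden; numeric fields listed in
    tolerances match within plus/minus the given tolerance."""
    tolerances = tolerances or {}
    factor = 1.0
    for key, track_value in track_meta.items():
        if key not in detection_meta:
            continue  # absent metadata carries no evidence either way
        det_value = detection_meta[key]
        if key in tolerances:
            if abs(det_value - track_value) > tolerances[key]:
                factor *= mismatch_penalty
        elif det_value != track_value:
            factor *= mismatch_penalty
    return factor

A hard-gating variant would simply return 0 on a mismatch; the probabilistic variant multiplies this factor into each hypothesis probability, so a mismatched detection is only associated when no better option exists.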

Create GMPHD tracker

  • Create GMPHDPredictor

  • Create a test for GMPHDPredictor

  • Create GMPHDUpdater

  • Create a test for GMPHDUpdater

  • Create GaussianMixtureState

  • Create a test for GaussianMixtureState

  • Create GaussianMixtureStatePrediction

  • Create a test for GaussianMixtureStatePrediction

  • Create GaussianMixtureMeasurementPrediction

  • Create a test for GaussianMixtureMeasurementPrediction

  • Create a Jupyter Notebook for running GMPHD tracker

Add Probability Hypothesiser

Add a single probability hypothesiser. Existing Data Associators should support this, but tests should be added to verify it.

Implementing expected likelihood particle filters (ELPF)

I'm keen to look at implementing expected likelihood particle filters for Stone-Soup, which, from what I understand, combine particle filter state estimation with the data association techniques in PDA/JPDA [Link].

I see we have existing support in the Stone-Soup repository for both particle filters and PDA/JPDA, so I think it would be sensible to agree on a suitable implementation of ELPF for single-target and multi-target trackers. Interest and support with this would be appreciated.

Documentation review

Prior to release, all code documentation needs review from code authors, followed by an overall documentation review from the community.

TODO:

  • Use of kwargs for component swappability.

Addition of Angle types breaks UKF functions

When UKF-related functions (e.g. sigma2gauss), or more generally any functions that perform matrix multiplication, are called on matrices of StateVector objects that contain an Angle type, the following error occurs:


~\Anaconda3\lib\site-packages\stonesoup\functions.py in sigma2gauss(sigma_points, mean_weights, covar_weights, covar_noise)
    157     """
    158 
--> 159     mean = sigma_points@mean_weights[:, np.newaxis]
    160 
    161     points_diff = sigma_points - mean

TypeError: ufunc 'matmul' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

This is an issue identified by @sdhiscocks and myself during the Nelson Codeathon. A quick fix was applied by casting all occurrences of StateVector objects to np.float_, e.g.:

mean = sigma_points.astype(np.float_)@mean_weights[:, np.newaxis]

However, the above comes at the cost of losing the benefits (e.g. angle-wrapping) provided by the Angle classes.

A suggested alternative was to cast such StateVector matrices to CovarianceMatrix objects before multiplying; however, this was not feasible at the time due to cyclic import errors (potentially to be solved by #91).

Reading Track metadata becomes costly for Tracks with long state histories

Following experimentation with some large-scale AIS datasets, it has been observed that repeatedly reading the Track metadata becomes a costly operation, especially as the state history of a Track grows. This is because the getter method has to iterate over all the states to produce the result, as can be seen here.

A potential fix can be applied by replacing the above lines as follows:

@property
def metadata(self):
    ...
    for state in reversed(self.states):
        if isinstance(state, Update) \
                and state.hypothesis.measurement.metadata is not None:
            metadata.update(state.hypothesis.measurement.metadata)
            break
    ...

Here is an example profiler output running the same code with the old and new versions:

Old

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
4789021  184.180    0.000 1897.829    0.000 ...\stonesoup\stonesoup\types\track.py:31(metadata)

New

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
4789021  103.760    0.000  399.821    0.000 ...\stonesoup\stonesoup\types\track.py:31(metadata)

However, by doing so, only the metadata fields of the latest detection are returned, which may not be desirable (e.g. in cases where the metadata fields vary between detections).

Seeing as metadata is becoming quite heavily used in StoneSoup, it may be worth having a different approach to storing/updating the Track metadata.
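
One such approach, sketched under the assumption that states are only ever appended (the class shape is illustrative, not the current implementation): maintain the merged metadata incrementally, so a read is O(1) instead of a scan over the state history:

class Track:
    """Sketch: merged metadata is updated as states are appended, rather
    than recomputed from the full state history on every read."""

    def __init__(self):
        self.states = []
        self._metadata = {}

    def append(self, state):
        self.states.append(state)
        # Mirrors the attribute access in the snippet above
        measurement = getattr(getattr(state, 'hypothesis', None),
                              'measurement', None)
        if measurement is not None and measurement.metadata is not None:
            self._metadata.update(measurement.metadata)

    @property
    def metadata(self):
        return self._metadata

This keeps the merged-over-history semantics of the current getter while avoiding repeated iteration; removing or reordering states would, however, require rebuilding the cache.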

Implementing multiple models (GPB, IMM)

I'm keen to look at implementing support in Stone-Soup for multiple-model algorithms, such as GPB and IMM. I'm fairly familiar with the theory, but less so with the Stone-Soup repository and the GitHub platform. Some support/collaboration with this would be appreciated.

Add meta-data to detections

Include additional information on a detection (i.e. as a key/value mapping), beyond the data in the state vector/covariance.
Also extend the CSV parser so that all remaining fields are automatically added to the meta-data.
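
A sketch of the CSV half, with illustrative names: every column not consumed by the state vector is passed through as metadata:

import csv
import numpy as np

def detections_from_csv(path, state_fields=('x', 'y')):
    """Sketch: yield (state_vector, metadata) pairs, where every CSV
    column not used for the state vector becomes a metadata entry."""
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            state_vector = np.array(
                [[float(row.pop(field))] for field in state_fields])
            yield state_vector, dict(row)  # remaining columns -> metadata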

Add ID to tracks

Also include a Track container/mapping which allows checking whether a track is already present and allows overwriting it with the latest version.
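
A minimal sketch of such a container, assuming a hypothetical track.id field:

class TrackSet:
    """Sketch: mapping keyed on track ID, so a re-submitted track
    overwrites its earlier version rather than duplicating it."""

    def __init__(self):
        self._tracks = {}

    def add(self, track):
        self._tracks[track.id] = track  # overwrite with the latest version

    def __contains__(self, track):
        return track.id in self._tracks

    def __iter__(self):
        return iter(self._tracks.values())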

How about adding hypothesizer that supports gating

In my case (mechanical scanning radar), approximately 50 or more targets are in the region of surveillance simultaneously, which renders GNN association impossible because of combinatorial explosion.

I think the association algorithm could be sped up by dividing the surveillance region into several sections and only considering hypotheses which associate detections that are in nearby sections.
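
A rough sketch of that binning idea (all names illustrative, not an existing Stone Soup component): hash detections into grid cells, then hypothesise only over detections from a track's own cell and its neighbours:

from collections import defaultdict
import numpy as np

def bin_detections(detections, cell_size):
    """Sketch: hash detections into square grid cells by 2D position."""
    cells = defaultdict(list)
    for detection in detections:
        x, y = np.ravel(detection.state_vector)[:2]
        cells[(int(x // cell_size), int(y // cell_size))].append(detection)
    return cells

def nearby_detections(track_position, cells, cell_size):
    """Consider only the track's cell and its 8 neighbours, instead of
    every detection in the surveillance region."""
    cx, cy = (int(p // cell_size) for p in np.ravel(track_position)[:2])
    return [d for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for d in cells.get((cx + dx, cy + dy), [])]

For this to be sound, cell_size must be at least as large as the association gate, so that a genuinely feasible detection can never sit more than one cell away.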

Implement a TimeRange class

Currently, associations and metrics that exist over a period of time inherit from a TimePeriod class with a start_timestamp and an end_timestamp.

A more elegant solution would be a TimeRange class which contains the start and end times, but also provides simple functions to return the length of the range and whether a given timestamp is within the range.
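
A minimal sketch of what such a class might look like:

import datetime
from dataclasses import dataclass

@dataclass
class TimeRange:
    """Sketch: a closed time interval with a length and membership test."""
    start_timestamp: datetime.datetime
    end_timestamp: datetime.datetime

    @property
    def duration(self) -> datetime.timedelta:
        return self.end_timestamp - self.start_timestamp

    def __contains__(self, timestamp: datetime.datetime) -> bool:
        return self.start_timestamp <= timestamp <= self.end_timestamp

This allows expressions such as time_range.duration and timestamp in time_range directly.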

Create non-simple radar sensors

The current simple radar sensor is range-bearing only. Options for range-bearing-elevation need to be added, plus an extension to range rate.
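
For reference, a sketch of the extended measurement function (sensor at the origin, names illustrative), mapping Cartesian position and velocity to range, bearing, elevation and range rate:

import numpy as np

def range_bearing_elevation(x, y, z, vx=0.0, vy=0.0, vz=0.0):
    """Sketch: Cartesian position/velocity to (range, bearing, elevation,
    range rate), with the sensor at the origin."""
    rng = np.sqrt(x**2 + y**2 + z**2)
    bearing = np.arctan2(y, x)
    elevation = np.arcsin(z / rng)
    range_rate = (x * vx + y * vy + z * vz) / rng  # radial velocity
    return rng, bearing, elevation, range_rate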

Draft framework interfaces

  • Decide on base component classes
  • Define abstract base classes for components
  • Document component classes
  • Decide on base data types
  • Define base types
  • Document data types

Create initial user interface

Should allow selection and configuration of components to generate configuration file. Ideally also interface with run manager (#3)

Test Review

Before release, all test cases will need review by code authors, including verifying code coverage.

JointHypothesis Property definition

The JointHypothesis property "hypotheses" is currently defined as a Hypothesis, when in reality it is a dictionary of the form Track: Hypothesis. It is defined this way because otherwise Hypothesis would import Track, Track imports Update, and Update imports Hypothesis, which is a circular import. This occurs because the function "get_measurement_prediction()" somehow ended up in the Updater rather than the Predictor. Once the function is moved back to the Predictor, this issue can be resolved.

Verify NMEA reader

A branch called "nmea" has been added. This branch contains a class called NMEAReader, which is located in the "reader/nmea.py" file. Additionally, this branch also contains facilities (decoder, parser, definitions, etc.) to parse raw AIS messages in NMEA format. These facilities are all under the "reader/aisutils" directory. There are several issues that need to be addressed:

  1. The facilities added to parse raw AIS messages do add some complexity to StoneSoup, which might be outside the scope of the project. However, to my knowledge there are no reliable AIS parsing libraries in pure Python (the most popular one on pip requires a C++ compiler, which is not so straightforward to get and install on Windows). So, do we keep the code under "reader/aisutils" as part of StoneSoup? Or do we release aisutils as a separate open-source library and add that as a requirement for StoneSoup?

  2. At the moment, NMEAReader class only reads AIS message types 1, 2, and 3, which are all Class A vessel position reports. Do we need to read any other types?

  3. I noticed that OpenSkyReader outputs detections in time-ascending order. I'm not sure if this is because the source (OpenSky) provides detections in this order, or because the reader is designed to sort its output by time. Currently, the detections generated by the NMEAReader class are not guaranteed to be sorted, because the raw AIS feed is not necessarily sorted in time. This means that the output of detections_gen can potentially look something like this:
    i. 1516233686, Detections Set A
    ii. 1516233696, Detections Set B
    iii. 1516233686, Detections Set C

The time value of output i. and output iii. above is the same (1516233686), but the Detection sets are different. Is this okay? My guess is probably not, but I'm not sure how to fix this at the moment.
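
One common workaround, sketched here as a suggestion rather than a fix to the reader itself: buffer the batches, merge those sharing a timestamp, and only release a timestamp once the feed has advanced some chosen lateness window past it:

from collections import defaultdict

def time_sorted(detection_batches, max_lateness):
    """Sketch: re-order a nearly-sorted stream of (timestamp, detections)
    pairs. Batches sharing a timestamp are merged; a timestamp is emitted
    once the input has advanced more than max_lateness beyond it."""
    buffer = defaultdict(set)
    for timestamp, detections in detection_batches:
        buffer[timestamp] |= detections
        for ready in sorted(t for t in buffer if t < timestamp - max_lateness):
            yield ready, buffer.pop(ready)
    for timestamp in sorted(buffer):  # flush whatever remains at end of feed
        yield timestamp, buffer.pop(timestamp)

Anything arriving more than max_lateness behind the feed would still come out late, so the window has to be chosen from the worst disorder expected in the raw AIS stream.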

Increase allowed line length

Is it time to change to a longer line-length standard in Stone Soup?

The current 80-character lines seem to me (and a couple of others I have spoken to) to negatively affect code readability. Many lines of code have to be wrapped several times and would be clearer on one or two lines.

From PEP8: "Some teams strongly prefer a longer line length. For code maintained exclusively or primarily by a team that can reach agreement on this issue, it is okay to increase the line length limit up to 99 characters, provided that comments and docstrings are still wrapped at 72 characters."

Bear in mind that was the position in 2001, and monitors have only got wider since then.
Switching to 99 characters, or even PyCharm's default of 120, seems like a good idea to me.

The switch is a few-line, single-file push (either tox.ini, setup.cfg, .pep8 or .flake8) and is, of course, backward compatible. No old code needs to be changed, but new code would have more flexibility.
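
For example, assuming the flake8 settings live in setup.cfg, the change would be along these lines:

[flake8]
max-line-length = 99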

Do people have any comments?

Create Probabilistic Data Association (PDA) filter

Create the components necessary for the PDA filter, including a custom Kalman Updater and a MultipleHypothesis data type.

  • [x] Create multi-measurement hypothesis
  • [x] Create joint/multi-measurement data associator
  • [x] Create PDA Filter

Record metadata as part of Track

Record metadata (especially changes in metadata) as part of the Track. Example: what MMSI (ship-unique identifier) the ship is broadcasting at each detection.
