
Comments (18)

Andlon commented on May 29, 2024

I'm not able to look into this right now, but would you be willing to provide the exact code you're using, so that we can reproduce it when someone is able to look at it?


Andlon commented on May 29, 2024

Thanks for your investigation @JulianKnodt!

Perhaps to settle the matter: would someone be willing to compute the determinant with exact arithmetic? Perhaps some symbolic toolkit like Maple/Mathematica is able to do that? Because right now we have two different answers, and it's not entirely clear to me which one is "more correct". It would be nice to settle the matter with confidence.

In any case, I will note that determinants are terribly sensitive for nearly-singular matrices. They should not be used for singularity checks in virtually any circumstance (unless you have a specialized algorithm like a geometric exact predicate). I'm inclined to close this, but it would be nice to confirm with exact arithmetic first, perhaps?
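
For example, a minimal sketch of why the determinant's magnitude says little about singularity: a scaled identity is perfectly conditioned, yet its determinant can be made arbitrarily small.

use nalgebra::Matrix5;

fn main() {
    // 0.01 * I is as well-conditioned as a matrix gets (condition number 1),
    // yet det(0.01 * I_5) = 0.01^5 = 1e-10.
    let m = Matrix5::<f64>::identity() * 0.01;
    println!("det = {:e}", m.determinant());
}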


JulianKnodt commented on May 29, 2024

@glennDittmann one thing you can try is using rational numbers (such as https://docs.rs/num/latest/num/rational/index.html), which will take up arbitrary amounts of memory but have no precision loss.
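
As a minimal sketch of the idea (assuming the num crate with its default features), rational arithmetic stays exact where f64 would accumulate rounding error:

use num::rational::Rational64;

fn main() {
    // 0.1 has no exact f64 representation, but 1/10 is stored exactly.
    let tenth = Rational64::new(1, 10);
    let sum = (0..10).fold(Rational64::new(0, 1), |acc, _| acc + tenth);
    assert_eq!(sum, Rational64::new(1, 1)); // exactly 1, no rounding
}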


Andlon commented on May 29, 2024

I think in Mathematica/Maple, all you might have to do is replace e.g. 98.1 with 981/10. IIRC, these software packages will otherwise interpret decimal numbers as floating point. There might be more elegant ways of dealing with this, though (I'm sure there are utility functions to convert a floating-point number to a rational representation).
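
On the Rust side, a small sketch of such a conversion (assuming the num crate): BigRational::from_float recovers the exact binary value the f64 actually stores, which is not 981/10; that discrepancy is precisely why you would type 981/10 in Mathematica.

use num::BigRational;

fn main() {
    // Prints a ratio with a power-of-two denominator, not 981/10.
    let exact = BigRational::from_float(98.1).unwrap();
    println!("98.1 as stored = {}", exact);
}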


redxaxder commented on May 29, 2024

would someone be willing to compute the determinant with exact arithmetic?

Sure. This is a matrix whose determinant can be hand-computed exactly.

Adding or subtracting one row of the matrix from another leaves the determinant unchanged. So as the first step, let's subtract the second-to-last row from the last row.

The result has a row of all zeros on the bottom.

┌                                                                                           ┐
│               6.8               8.9                 5            150.45                 1 │
│                 0            100000           -100000       20000000000                 1 │
│                 2               8.7               3.9 94.89999999999998                 1 │
│               7.9               5.7               1.8             98.14                 1 │
│                 0                 0                 0                 0                 0 │
└                                                                                           ┘

Since this matrix has a zero row, the determinant is zero.


Andlon commented on May 29, 2024

@redxaxder: hah, I didn't actually look at the matrix itself - indeed it has redundant rows. That certainly settles it. Thanks!

I don't think there's anything inherently wrong with nalgebra here. The determinant value is reasonable given that the matrix has entries of enormous scale, as @JulianKnodt points out. Why numpy gets this right I don't know; it could be a coincidence, or they may have a more accurate determinant computation. I'm inclined to close, though, as in my view nalgebra appears to be computing a result that is accurate in relative terms given the finite precision we have here. I'll leave it open to see if anyone else has a different opinion.


glennDittmann commented on May 29, 2024

Sure

use nalgebra::{Matrix5, Vector4};

let a = Vector4::new(6.8, 8.9, 5.0, 0.0);
let b = Vector4::new(2.0, 8.7, 3.9, 0.0);
let c = Vector4::new(7.9, 5.7, 1.8, 0.0);
let d = Vector4::new(0.0, 100000.0, -100000.0, 0.0);

// Each row is (x, y, z, x^2 + y^2 + z^2 - w, 1). Note that the third and
// fifth rows both come from `c`, so the matrix is exactly singular.
let matrix = Matrix5::new(
    a.x, a.y, a.z, a.x * a.x + a.y * a.y + a.z * a.z - a.w, 1.0,
    b.x, b.y, b.z, b.x * b.x + b.y * b.y + b.z * b.z - b.w, 1.0,
    c.x, c.y, c.z, c.x * c.x + c.y * c.y + c.z * c.z - c.w, 1.0,
    d.x, d.y, d.z, d.x * d.x + d.y * d.y + d.z * d.z - d.w, 1.0,
    c.x, c.y, c.z, c.x * c.x + c.y * c.y + c.z * c.z - c.w, 1.0,
);
println!("Matrix det: {}", matrix.determinant());

I also tried it with hardcoded values, with the same result.


JulianKnodt commented on May 29, 2024

Currently, the reason your approach gets an incorrect value is that the matrix is so poorly conditioned (the norm of the matrix is about 20000000000).

What nalgebra currently does is compute the determinant directly through the LU decomposition, which is numerically unstable because of the high matrix norm.

Numpy currently computes the sign and log of the determinant, which is much more numerically stable. Probably the calculator just calls some library that does this for it.

Internally, numpy calls this LAPACK function https://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga0019443faea08275ca60a734d0593e60.html,
which identifies when there is an exact zero.

Reference:
https://www.nicolasboumal.net/papers/MAT321_Lecture_notes_Boumal_2019.htm

Now, what can be done about this? First, if you can reduce the scale of your computation, it may be more numerically stable. Depending on how your simulation works, the points could be bounded (a point with 10k magnitude is a huge difference compared to points in a much smaller range).

nalgebra could probably also handle exact zeros and numerical instability better, but computing the determinant is inherently unstable, so you should use an epsilon to decide whether the value is on the boundary rather than comparing directly with 0.
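
For concreteness, a rough sketch of what a sign/log-magnitude determinant could look like on top of nalgebra's LU (the slogdet helper here is hypothetical, not part of nalgebra's API):

use nalgebra::DMatrix;

// Hypothetical helper: returns (sign, ln|det|), bailing out early on an
// exactly zero pivot, similar in spirit to what LAPACK reports.
fn slogdet(m: &DMatrix<f64>) -> (f64, f64) {
    let lu = m.clone().lu();
    // Each recorded row swap in the permutation flips the determinant's sign.
    let mut sign = if lu.p().len() % 2 == 0 { 1.0 } else { -1.0 };
    let mut log_abs = 0.0;
    for u_ii in lu.u().diagonal().iter() {
        if *u_ii == 0.0 {
            return (0.0, f64::NEG_INFINITY); // exact zero pivot: det == 0
        }
        sign *= u_ii.signum();
        log_abs += u_ii.abs().ln();
    }
    (sign, log_abs)
}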


glennDittmann commented on May 29, 2024

Hi @JulianKnodt,
thanks for your reply, interesting indeed!
Of course I account for the instability of numerical computations when comparing floating-point values; I just saw this and was curious.
I also had errors on the order of 1e-4, which seems a little high to factor out with an epsilon, when the result could be verified as 0.0 in general.

Is there a specific reason nalgebra does it this way? Otherwise, I think doing it the "LAPACK way" could still be beneficial. Or let the user choose the method via a flag; just a rough thought.


JulianKnodt commented on May 29, 2024

@glennDittmann I think the primary reason nalgebra currently doesn't compute the log determinant is that it's more work to implement (I'm not entirely sure whether it's easy to add or not), and for most well-conditioned matrices it's fine to just compute the standard determinant.

That's fair, 1e-4 could be considered large, but if all your points are around 1e5 then I'd say it's on the smaller side.
I would also note that "exactly 0.0" is not quite right; I assume that when a value underflows (e.g. 1e-10000) or is exactly 0, it gets converted to 0.
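
For instance, an f64 product can underflow to an exact 0.0:

fn main() {
    // 1e-200 * 1e-200 = 1e-400 is below f64's smallest positive subnormal
    // (about 5e-324), so the result is exactly 0.0.
    let tiny = 1e-200_f64;
    assert_eq!(tiny * tiny, 0.0);
}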


Andlon commented on May 29, 2024

Just want to add that if you actually want this operation to reliably give you the correct result, you probably want to take a look at Shewchuk's Robust Predicates.

Perhaps nalgebra ought to compute the determinant via logs; I'm not sure if there are any downsides to this. I can imagine it's more expensive, but that should be negligible compared to the factorization itself, except possibly for some very small matrix sizes (but for sizes smaller than 4 we use an explicit formula anyway, IIRC). Are you aware of any downsides to the logarithm approach, @JulianKnodt?


JulianKnodt commented on May 29, 2024

@Andlon I don't particularly know of any downsides (I also don't know how specifically it's implemented, but I can't imagine it being more expensive). Maybe it could lose precision farther from 0, but generally if the value is far from 0, being a bit off isn't so bad.


JulianKnodt commented on May 29, 2024

Hm, actually I implemented the logarithm of the determinant, but it doesn't make the output more accurate. It could be that the pivot is not optimal? Will continue to investigate.

Edit: after checking Wolfram Alpha, I'm no longer convinced that the correct value is 0.

It might be the case that numpy has an incorrect value. Funnily enough, if you use f32, then you get a value of 0.


glennDittmann commented on May 29, 2024

@JulianKnodt wow, thanks for going the extra mile.
It's also interesting that Wolfram Alpha gets basically the same result as nalgebra; in theory it should still be 0.0.

Should we close this one then or leave it open for further discussion?


glennDittmann commented on May 29, 2024

I am just setting up both Maple and Mathematica on my machine. Will update this comment later with results.

Edit:
Maple computes det = 0.
Mathematica computes det = -0.0000125858, as in the web tool linked by @JulianKnodt above, however with the warning: Result for Det of badly conditioned matrix [...] may contain significant numerical errors.


glennDittmann commented on May 29, 2024

Hmm... .determinant() has some trait bounds that are not satisfied by Rational64 out of the box.
Otherwise, thanks for the input; I will maybe try again in the future.
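
For what it's worth, one way around those trait bounds is to skip nalgebra's determinant() entirely and run Gaussian elimination by hand over exact rationals. A minimal sketch (the exact_det helper is illustrative, assuming the num crate):

use num::{BigRational, One, Zero};

// Determinant by exact Gaussian elimination: track the sign across row swaps
// and accumulate the pivots. With rationals there is no rounding at any step.
fn exact_det(mut m: Vec<Vec<BigRational>>) -> BigRational {
    let n = m.len();
    let mut det = BigRational::one();
    for k in 0..n {
        // Find a nonzero pivot in column k.
        let Some(p) = (k..n).find(|&r| !m[r][k].is_zero()) else {
            return BigRational::zero(); // no pivot: the matrix is singular
        };
        if p != k {
            m.swap(p, k);
            det = -det; // a row swap flips the determinant's sign
        }
        det *= m[k][k].clone();
        // Eliminate the entries below the pivot.
        for r in (k + 1)..n {
            let factor = &m[r][k] / &m[k][k];
            for col in k..n {
                let sub = &factor * &m[k][col];
                m[r][col] -= sub;
            }
        }
    }
    det
}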


JulianKnodt commented on May 29, 2024

Er, well, what I meant is that you should use rational numbers (or big floats, or whatever the equivalent is) in Maple or Mathematica. Those have infinite precision and thus will give you the exact answer. I believe LU should stay rational since it's only multiplications/divisions and additions.


glennDittmann commented on May 29, 2024

@redxaxder Thanks! I wanted to point this out as well.
@Andlon Yes, I see your point here.
However, I think IF there is a reasonably efficient technique that can detect these kinds of matrices and early-out with the result (if I understood correctly, that is what numpy / LAPACK are doing), it might still be valuable to remember for the roadmap / a future implementation.

