
Comments (7)

cgobat commented on May 27, 2024

Glad you found a workaround while I look into this. For reference, the 1/asymmetricResults_lambda operation should trigger/call asymmetricResults_lambda.__rtruediv__(other=1). It seems unlikely that Python would interpret it as multiplication instead, but I will investigate.
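For anyone following along, here is a minimal sketch (a toy class, not the library's actual implementation) of how Python dispatches 1/x to the right-hand operand's __rtruediv__:

    class Wrapped:
        """Toy stand-in for a class like a_u."""
        def __init__(self, value):
            self.value = value

        def __rtruediv__(self, other):
            # Called for `other / self` after int.__truediv__ returns
            # NotImplemented for an operand type it doesn't recognize.
            print("__rtruediv__ called with other =", other)
            return other / self.value

    print(1 / Wrapped(4.0))  # prints the message above, then 0.25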

sevenstarknight commented on May 27, 2024

Follow-up: I'm using asymmetricResults_M = asymmetricResults_lambda**-1.0 for now; it returns results as expected.

sevenstarknight commented on May 27, 2024

Would you be OK if I put a branch together with a unit test?

cgobat commented on May 27, 2024

Absolutely! :)

sevenstarknight commented on May 27, 2024

So I don't think the errors should be flipped in division:

    pos = np.sqrt((self.plus/self.value)**2 + (other.minus/other.value)**2) * np.abs(result)
    neg = np.sqrt((self.minus/self.value)**2 + (other.plus/other.value)**2) * np.abs(result)

Per Taylor, uncertainties in products and quotients propagate the same way, i.e., as the sum in quadrature of their relative errors.
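For reference, the symmetric-error form of that rule from Taylor can be written as follows; which asymmetric component (plus or minus) stands in for each delta, for each operand, is exactly what's at issue in this thread:

    % Taylor's rule for products and quotients with symmetric errors:
    % for q = x*y or q = x/y,
    \frac{\delta q}{|q|} = \sqrt{\left(\frac{\delta x}{|x|}\right)^{2}
                               + \left(\frac{\delta y}{|y|}\right)^{2}}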

sevenstarknight commented on May 27, 2024

Same with subtraction:

    def __sub__(self, other):
        if isinstance(other, type(self)):
            pass
        else:
            other = a_u(other, 0, 0)  # wrap plain numbers as exact values
        result = self.value - other.value
        # note the flip: other.minus feeds pos, other.plus feeds neg
        pos = np.sqrt(self.plus**2 + other.minus**2)
        neg = np.sqrt(self.minus**2 + other.plus**2)
        # (remainder of the method omitted in the quote)
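For concreteness, a minimal snippet (assuming asymmetric_uncertainty is installed; the same pairing presumably applies in the reflected __rsub__) showing the behavior in question — the values match the example discussed later in this thread:

    from asymmetric_uncertainty import a_u

    b = a_u(5.0, 3.0, 0.3)  # 5.0 (+3.0, -0.3)
    # Subtracting b flips its error components on the result:
    print(15 - b)  # 10.0 (+0.3, -3.0), not (+3.0, -0.3)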

cgobat commented on May 27, 2024

Upon looking at this further, I figured out what's going on and I think this is actually just an issue of expectations. As far as I can tell, the division and subtraction operations behave as designed/intended. The errors get flipped because a larger positive error in the denominator, for instance, should result in a larger error in the negative direction on the quotient/result. Consider the following simplified case:

    from asymmetric_uncertainty import a_u
    a = a_u(2., 1., 0.1) # 2.0 (+1.0, -0.1)
    1/a # = 0.5 (+0.025, -0.25)

This is the current behavior, and is how I would argue things should be. In a, a positive error of 1.0 with a negative error of only 0.1 indicates that the "true value" of the quantity is much more likely to be closer to 3 than to, say, 1. When we divide 1 by a, we should expect the result to be more likely to fall below 0.5 than above it; hence the result's larger error in the negative direction. We can see the same principle in effect with subtraction:

    b = a_u(5., 3., 0.3) # 5.0 (+3.0, -0.3)
    15 - b # = 10.0 (+0.3, -3.0)

This result should make perfect sense when you consider the fact that -b is -5.0 (+0.3, -3.0).

Think of it like this: when computing each of the two asymmetric errors during subtraction/division operations, we have to consider which components of which errors contribute to making the result bigger or smaller. For division, the magnitude of the positive error on the numerator and that of the negative error on the denominator both contribute to an increase in the potential size of the quotient (because a larger numerator or a smaller denominator both mean a larger result); likewise, the numerator's negative error and the denominator's positive error both serve to make the quotient smaller.
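To make that pairing concrete, here is a hand-propagation of the 1/a example above in plain Python/NumPy (not the library's code; the variable names are just illustrative):

    import numpy as np

    # 1/a for a = 2.0 (+1.0, -0.1): numerator is the exact value 1.
    num_val, num_plus, num_minus = 1.0, 0.0, 0.0
    den_val, den_plus, den_minus = 2.0, 1.0, 0.1

    result = num_val / den_val  # 0.5

    # A smaller denominator (its minus error) makes the quotient bigger,
    # so the denominator's minus error feeds the result's plus error:
    pos = np.sqrt((num_plus/num_val)**2 + (den_minus/den_val)**2) * abs(result)
    # A larger denominator (its plus error) makes the quotient smaller:
    neg = np.sqrt((num_minus/num_val)**2 + (den_plus/den_val)**2) * abs(result)

    print(result, pos, neg)  # 0.5, 0.025, 0.25 -- i.e. 0.5 (+0.025, -0.25)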


> uncertainties in products and quotients propagate the same way, i.e., as the sum in quadrature of their relative errors

Sure, agreed. Errors are propagated using the same formula for both multiplication and division, involving the summation of the relative errors in quadrature. However, that doesn't tell us anything about what to do with asymmetric errors, and where to use each error component in the formula. In this case, errors in the positive or negative directions have different effects on the result depending on which operand they belong to.


While I believe that subtraction and division are working as intended, you did make me aware that the behavior of exponentiation with negative powers is not consistent with this: asymmetricResults_lambda**-1.0 should yield the same result as 1/asymmetricResults_lambda. I will open a new issue to address this.
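For reference, a quick check of the inconsistency (a hypothetical snippet, assuming the package is installed):

    from asymmetric_uncertainty import a_u

    a = a_u(2.0, 1.0, 0.1)  # 2.0 (+1.0, -0.1)
    print(1 / a)      # 0.5 (+0.025, -0.25), per the division behavior above
    print(a ** -1.0)  # should match the line above once the new issue is fixed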

I am going to close this issue, but feel free to respond/re-open it if this didn't clear it up.
