Comments (17)
On v1.6.7 I get
julia> -log10(maximum(abs.(sdiff))) # < 3
15.175451669208671
on v1.9.4 and v1.8.5
julia> -log10(maximum(abs.(sdiff))) # < 3
2.233375639545474
so this must be an "old" bug.
from julia.
Reproduced on x86 Linux. The issue appears to be in log (although it's somewhat hard to prove, because matrix logs are not unique).
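One branch-independent way around that proof difficulty: the matrix log is non-unique, but every valid logarithm L of A satisfies exp(L) == A exactly, so checking exp(log(A)) ≈ A tests log without pinning down a branch. A sketch (scipy assumed, matching the thread's later comparisons; the 3×3 N is illustrative):

```python
import numpy as np
import scipy.linalg as sla

# Any valid logarithm L of A must satisfy exp(L) == A, so the round-trip
# exp(log(A)) ~ A is a branch-independent test of the log routine itself.
N = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
A = sla.expm(N / 10)  # small, well-conditioned input
err = np.linalg.norm(sla.expm(sla.logm(A)) - A) / np.linalg.norm(A)
print(err)  # tiny for scipy's pair; Julia's pair fails this same check above
```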
So this code is a very direct Matlab translation, with all the oddness that entails, but what I've found is that if we make it so s, m = 1, 4 rather than 0, 5, we get the right answer. See #54867.
Could you please quickly check which one seems to be wrong: log or exp?
A simpler example:
N = [0.0 -0.8 1.1 0.2 -0.2; 0.8 0.0 0.8 -0.4 -0.1; -1.1 -0.8 0.0 0.4 -0.1; -0.2 0.4 -0.4 0.0 0.0; 0.0 0.0 0.0 0.0 0.0]
k = 10 # also bad for k = 100 or 1000
sdiff = log(exp(N/k)) - N/k
@show -log10(maximum(abs.(sdiff))) # 2.9
I agree with @oscardssmith , I suspect the matrix logarithm to be at fault.
Edit: for completeness, the Python code (with the imports it needs):
import numpy as np
import scipy as sp
import scipy.linalg  # makes sp.linalg available on older SciPy too

N = np.array([[0.0, -0.8, 1.1, 0.2, -0.2],
              [0.8, 0.0, 0.8, -0.4, -0.1],
              [-1.1, -0.8, 0.0, 0.4, -0.1],
              [-0.2, 0.4, -0.4, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0]])
k = 10
sdiff = sp.linalg.logm(sp.linalg.expm(N/k)) - N/k
print(-np.log10(np.max(np.abs(sdiff)))) # 15.0
This second example isn't as good because, IIUC, log(exp(M)) == M isn't expected to hold for matrices due to branch cuts.
> This second example isn't as good because, IIUC, log(exp(M)) == M isn't expected to hold for matrices due to branch cuts.

That's why I included the parameter k. If M is small enough, I would expect log(exp(M)) == M to be true no matter what, right?
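The "small enough" intuition can be made precise (a sketch of the standard argument, not something spelled out in the thread): for a diagonalizable M, the principal log inverts exp whenever every eigenvalue λ of M satisfies |Im(λ)| < π, i.e. no eigenvalue of exp(M) lands on the negative real axis; dividing by k shrinks the spectrum into that strip:

```python
import numpy as np

# Check the |Im(eigenvalue)| < pi condition for N/k, for the thread's 5x5 N.
N = np.array([[0.0, -0.8, 1.1, 0.2, -0.2],
              [0.8, 0.0, 0.8, -0.4, -0.1],
              [-1.1, -0.8, 0.0, 0.4, -0.1],
              [-0.2, 0.4, -0.4, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0]])
for k in (1, 10, 100):
    strip = np.max(np.abs(np.linalg.eigvals(N / k).imag))
    print(k, strip, strip < np.pi)  # the condition already holds at k = 1 here
```

So for k = 10 the round-trip really should hold to machine precision, and the ~3-digit result is a genuine bug, not a branch-cut artifact.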
> so this must be an "old" bug.

I suspect the problem to be in log_quasitriu, which was introduced in 6913f9c, in 2021.
> I suspect the problem to be in log_quasitriu, which was introduced in 6913f9c, in 2021.

That's more recent than Julia v1.6, though.
More evidence that the problem is the logarithm:
N = [0.0 -0.8 1.1 0.2 -0.2; 0.8 0.0 0.8 -0.4 -0.1; -1.1 -0.8 0.0 0.4 -0.1; -0.2 0.4 -0.4 0.0 0.0; 0.0 0.0 0.0 0.0 0.0]
k = 10
M = exp(N/k)
sdiff = log(M*M) - 2*log(M)
@show -log10(maximum(abs.(sdiff))) # 2.6
The equivalent code in Python gives 14.7.
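The Python side of that comparison is only quoted above; a sketch of what it presumably looks like (scipy assumed):

```python
import numpy as np
import scipy.linalg as sla

# Python counterpart of the log(M*M) - 2*log(M) check; the comment above
# reports ~14.7 digits for this.
N = np.array([[0.0, -0.8, 1.1, 0.2, -0.2],
              [0.8, 0.0, 0.8, -0.4, -0.1],
              [-1.1, -0.8, 0.0, 0.4, -0.1],
              [-0.2, 0.4, -0.4, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0]])
M = sla.expm(N / 10)
sdiff = sla.logm(M @ M) - 2 * sla.logm(M)
print(-np.log10(np.max(np.abs(sdiff))))
```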
> I suspect the problem to be in log_quasitriu, which was introduced in 6913f9c, in 2021.

> That's more recent than Julia v1.6, though.

If it were the case that PR #39973 is to blame (wild guess), then the bug would have appeared first in v1.7.0.
For completeness, I checked the code above (i.e., with log(M*M) - 2*log(M)) on various versions, and indeed, the bug appears between v1.6.7 and v1.7.3. My money is still on PR #39973.
An even simpler test:
function test_log(N, k=10)
    M = exp(N / k)
    sdiff = log(M*M) - 2*log(M)
    return -log10(maximum(abs.(sdiff)))
end
Now simply with
N = [0.0 -1.0 1.0; 1.0 0.0 0.0; 0.0 0.0 0.0]
run
test_log(N) # 2.1
and the same in Python returns 15.4.
It's not clear to me what the cause is. I tried a specific test from scipy's test suite, and this is the result:
julia> VERSION
v"1.10.4"
julia> A = [3.2346e-01 3.0000e+04 3.0000e+04 3.0000e+04;
0.0000e+00 3.0089e-01 3.0000e+04 3.0000e+04;
0.0000e+00 0.0000e+00 3.2210e-01 3.0000e+04;
0.0000e+00 0.0000e+00 0.0000e+00 3.0744e-01];
julia> logA = [ -1.12867982029050462e+00 9.61418377142025565e+04 -4.52485573953179264e+09 2.92496941103871812e+14;
0.00000000000000000e+00 -1.20101052953082288e+00 9.63469687211303099e+04 -4.68104828911105442e+09;
0.00000000000000000e+00 0.00000000000000000e+00 -1.13289322264498393e+00 9.53249183094775653e+04;
0.00000000000000000e+00 0.00000000000000000e+00 0.00000000000000000e+00 -1.17947533272554850e+00];
julia> (logA - log(A)) / norm(logA)
4×4 Matrix{Float64}:
0.0 5.97008e-25 9.78138e-21 -1.06839e-15
0.0 0.0 4.97507e-25 1.30418e-20
0.0 0.0 0.0 2.98504e-25
0.0 0.0 0.0 0.0
julia> (exp(logA) - A) / norm(A)
4×4 Matrix{Float64}:
-1.81534e-8 0.000297088 50.5783 -2.79971e6
0.0 2.33977e-8 0.00116877 8.26782
0.0 0.0 3.53834e-10 -0.00160443
0.0 0.0 0.0 -3.34387e-8
julia> (exp(log(A)) - A) / norm(A)
4×4 Matrix{Float64}:
-1.81534e-8 0.000297088 50.5783 -2.79971e6
0.0 2.33977e-8 0.00116877 8.26782
0.0 0.0 3.53834e-10 -0.00160443
0.0 0.0 0.0 -3.34387e-8
julia> (exp(log(A)) - A)
4×4 Matrix{Float64}:
-0.001334 21.8314 3.71673e6 -2.05736e11
0.0 0.00171937 85.8868 6.07558e5
0.0 0.0 2.60014e-5 -117.901
0.0 0.0 0.0 -0.00245723
OTOH, we have:
>>> (sp.linalg.expm(A_logm) - A) / sp.linalg.norm(A)
array([[ 0.00000000e+00,  1.33667877e-15,  3.49612787e-10, -5.04994630e-07],
       [ 0.00000000e+00,  0.00000000e+00,  1.23766552e-15,  4.78558722e-11],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  1.38618539e-15],
       [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00,  0.00000000e+00]])
So, the computed log is really close to the "known" solution, but the exp of it is completely wrong in the upper right corner.
ADDENDUM: It seems, however, that this specific example fails on all versions starting from 1.6.x.
> I tried a specific test from scipy's test suite
It's worth noting that the condition number of that matrix is awful:
julia> cond(A)
1.8884133342622014e20
julia> cond(logA)
4.086839610043594e28
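A rough back-of-the-envelope on why this example may be a red herring (a sketch; the ordinary condition number is only a stand-in for the true conditioning of exp at logA):

```python
import numpy as np

# Roundoff-sized relative perturbations of logA are amplified by roughly
# cond(logA), so the exp(logA) round-trip is hopeless in Float64 no matter
# how accurately log itself was computed.
logA = np.array([
    [-1.12867982029050462e+00, 9.61418377142025565e+04, -4.52485573953179264e+09, 2.92496941103871812e+14],
    [0.0, -1.20101052953082288e+00, 9.63469687211303099e+04, -4.68104828911105442e+09],
    [0.0, 0.0, -1.13289322264498393e+00, 9.53249183094775653e+04],
    [0.0, 0.0, 0.0, -1.17947533272554850e+00]])
amplification = np.finfo(float).eps * np.linalg.cond(logA)
print(amplification)  # far above 1: Float64 cannot carry logA accurately enough
```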
I have an independent myexp implementation (our actual exp implementation is better but has limited type support, see #51008), and it similarly struggles with this:
function myexp(A)
    # compute exp(A) == exp(A/2^n)^(2^n)
    iszero(A) && return float(one(A))
    # TODO: decide the best norm to use here: op-1, op-Inf, or frobenius are probably good candidates - maybe take the min of all three
    nA = float(norm(A, 2))
    fr, ex = frexp(nA)
    logscale = max(4 + ex, 0) # ensure nA^(4+1)/factorial(4+1) < eps for 4 or whatever power our approximation is
    p = ntuple(i -> inv(prod(1.0:i-1)), Val(9))
    scale = exp2(-logscale)
    sA = A * scale
    if sA isa AbstractMatrix
        expA = evalpoly(sA, UniformScaling.(p))
    else
        expA = evalpoly(sA, p)
    end
    for _ in 1:logscale
        expA = expA * expA
    end
    return expA
end
And with this and BigFloat:
julia> norm(exp(logA) - A) / norm(A)
2.799714043033207e6
julia> norm(myexp(logA) - A) / norm(A) # myexp is generally worse
5.134964610902018e6
julia> norm(myexp(big.(logA)) - A) / norm(A) |> Float64 # despite being worse, BigFloat makes this version work
8.141481015780612e-7
Note that the squaring part of our exp routine is probably very much to blame, given that BigFloat resolves that problem even though the exp kernel is not expanded beyond what is roughly needed for Float64. I don't think we should use that example for this particular issue.
However, myexp does nothing to resolve the original example:
julia> cond(M) # extremely well conditioned
1.1906431494194285
julia> norm(exp(log(M)) - M) / norm(M) |> Float64
0.003052024023511132
julia> norm(myexp(big.(log(M))) - M) / norm(M) |> Float64
0.0030520240235111305
Given that the matrix exponential is reasonably "simple" to compute (modulo stability issues, which even the literature has not fully settled), I am much more inclined to suspect that log is the culprit.
Confirmation that log is the culprit:
function eigapply(fun, A)
    D, V = eigen(A)
    return V * Diagonal(fun.(D)) / V
end
julia> norm(exp(log(M)) - M) / norm(M)
0.003052024023511132
julia> norm(exp(eigapply(log,M)) - M) / norm(M)
6.499342713344915e-16
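The same check can be reproduced outside Julia with plain numpy (an illustrative sketch; eigapply mirrors the Julia function above, and M = exp(N/10) is diagonalizable, so both exp and log can go through the eigendecomposition, bypassing the quasi-triangular code path entirely):

```python
import numpy as np

# Apply a scalar function to a diagonalizable matrix via its eigendecomposition.
def eigapply(fun, A):
    w, V = np.linalg.eig(A)
    return V @ np.diag(fun(w)) @ np.linalg.inv(V)

N = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
M = eigapply(np.exp, N / 10)  # equals the matrix exponential here
err = np.linalg.norm(eigapply(np.exp, eigapply(np.log, M)) - M) / np.linalg.norm(M)
print(err)  # round-trips to roughly machine precision
```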
I think the problem is in log_quasitriu. From the example above (i.e., qtriu = schur(exp(N/10)).T):
qtriu = [0.9950041652780259 -0.09983341664682815 0.07153667745275474; 0.09983341664682804 0.9950041652780259 0.06981527929449967; 0.0 0.0 1.0]
Now
LinearAlgebra.log_quasitriu(qtriu)[:,end]
gives
0.07240564253650691
0.06891743457599916
0.0
and in Python, the same vector is
0.07496781758158656
0.06618025632357401
0.0
Pretty big discrepancy, already in the second significant digit.
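Since qtriu has three distinct eigenvalues (e^{±0.1i} and 1), a numpy-only eigendecomposition gives the principal logarithm directly and can arbitrate between the two columns (a sketch):

```python
import numpy as np

# Principal log of qtriu via diagonalization: distinct eigenvalues, so
# V * diag(log(w)) * inv(V) is the principal logarithm.
qtriu = np.array([[0.9950041652780259, -0.09983341664682815, 0.07153667745275474],
                  [0.09983341664682804, 0.9950041652780259, 0.06981527929449967],
                  [0.0, 0.0, 1.0]])
w, V = np.linalg.eig(qtriu)
logq = (V @ np.diag(np.log(w)) @ np.linalg.inv(V)).real
print(logq[:, -1])
```

The eigen-based column agrees with the Python values, not the log_quasitriu ones, which localizes the bug to log_quasitriu rather than scipy.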