This seems like the most fundamental feature, yet it is missing. Are you planning to implement it at some point, or should I be looking for a hack to get the coefficients out of a Poly object?
Array(T, n) results in deprecation warnings; it would be better to change it to Vector{T}(n).
convert{T,S}(p::Poly{T}, s::S) should always convert s to Poly{T}, not to Poly{promote_type(T,S)}. Otherwise, how could convert(Poly{Float64}, BigFloat(5)) do exactly what we want?
It would be great if we relaxed the requirements in the .travis.yml file. Allowing failures on nightly builds would make it smoother to develop the package for supported environments. The problem here is that #98 and #99 are failing on nightly in a different place from the failures introduced in the corresponding PRs.
In discussion #117 the absence of eachindex came up. It should be added so that expressions such as [p[i] * variable(p)^i for i in eachindex(p)] work.
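A minimal sketch of such a method, assuming the usual zero-based coefficient indexing where p[0] is the constant term (this is just one option, not the package's definition):

```julia
using Polynomials

# Hedged sketch: iterate coefficient indices 0:degree, matching p[i] where
# p[0] is the constant term.
Base.eachindex(p::Poly) = 0:length(p)-1

p = Poly([1, 2, 3])
q = sum(p[i] * variable(p)^i for i in eachindex(p))  # rebuilds 1 + 2x + 3x^2
```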
It is very confusing to see two very similar constructors, Poly() and poly(), which behave quite differently. I propose renaming poly() so that it is evident what it does, for example byroots() or fromroots() or something similar.
As is, it causes an issue with a heuristic used in the Roots package to test whether a function is a polynomial, since rational functions also get identified.
If this meets no objection, I'll put in a pull request with the change and update the tests.
Right now the only check in the package is T<:Number. Hence, NaNs and Infs result in weird behaviours. One example was in #78, where the polynomial evaluation was wrong.
We also see strange behaviour when taking derivatives.
If the package continues to support NaNs and Infs, I believe any operation on NaN should produce NaN, including the derivative. Right now, the derivative of p(x) = NaN is 0.0. Similarly, when p(x) = Inf, the derivative again evaluates to 0.0. However, applying the definition (at any point) gives (Inf - Inf)/h, whose numerator is also NaN. I would expect the derivative of Inf to yield NaN instead.
Horner's scheme as currently implemented gives wrong results.
Observed Behaviour
julia> using Polynomials
julia> p1 = Poly([Inf, Inf])
Poly(Inf + Inf⋅x)
julia> p1(Inf)
NaN
julia> p1(-Inf)
NaN
julia> p1(0)
NaN
julia> p2 = Poly([0, Inf])
Poly(Inf⋅x)
julia> p2(-Inf)
NaN
Desired Behaviour
julia> using Polynomials
julia> p1 = Poly([Inf, Inf])
Poly(Inf + Inf⋅x)
julia> p1(Inf)
Inf
julia> p1(-Inf)
NaN
julia> p1(0)
NaN
julia> p2 = Poly([0, Inf])
Poly(Inf⋅x)
julia> p2(-Inf)
-Inf
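One way to obtain the desired table is to seed Horner's rule with the leading coefficient instead of zero(T), so the evaluation never computes 0 * Inf = NaN. A minimal sketch on plain coefficient vectors (not the package's internal evaluator):

```julia
# Seed with the leading coefficient; a zero seed would turn every evaluation
# at ±Inf into NaN via 0 * Inf. Coefficients are in ascending order.
function horner(coeffs::AbstractVector, x)
    isempty(coeffs) && return zero(x)
    b = coeffs[end]
    for i in length(coeffs)-1:-1:1
        b = b * x + coeffs[i]
    end
    b
end

horner([Inf, Inf], Inf)    # Inf
horner([0.0, Inf], -Inf)   # -Inf
horner([Inf, Inf], -Inf)   # NaN, from Inf*(-Inf) + Inf
```

Note that p1(0) still gives NaN under this scheme, from Inf*0 + Inf, which matches the desired table above.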
Alan sent me a copy of this, which describes a clever way to calculate the eigenvalues of a companion matrix. The authors also provide their Fortran code, which I have wrapped in an unregistered package, but I think it would make sense to merge it into this package. Any objections?
The package has some problems related to the conjugation of Poly objects. In my opinion, conj should not only return a new Poly with conjugated coefficients but also keep track of the conjugation of its variable. The solution proposed in #59 seems to be correct only for real inputs.
What I would expect from conj is that conj(a*x) = conj(a)*conj(x). In fact, after taking conj of a Poly, the variable x has effectively changed to y = conj(x). What I mean by that is that conj(p(x)) + p(x) should not be allowed for a Poly object p(x) under the current implementation.
To me, the variable name together with its conjugation property should determine the variable information of a Poly object. In fact, I have implemented RationalFunctions.jl in this way, and it seems to be working.
The way I implemented RationalFunction objects was to encode both the variable name (in this case, p.var) and its conjugation property (for example, a value type such as Conj{true}) in the type itself. Then I could rely on Julia's dispatch mechanism for all mathematical operations, without any runtime if checks to guarantee same-variable polynomial operations. I am not sure whether this is the best practice, but it seems to be working.
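A minimal sketch of the idea, with illustrative names (Conj, RFun) that are not the actual RationalFunctions API:

```julia
# The conjugation flag lives in the type, so dispatch separates conjugated
# from unconjugated objects; mixing them fails at method lookup, with no
# runtime `if` checks.
struct Conj{B} end            # value type: Conj{true} or Conj{false}

struct RFun{T,C}              # the variable name could likewise be a parameter
    coeffs::Vector{T}
    var::Symbol
end

# only operands with matching conjugation hit this method
# (assumes equal coefficient lengths for brevity)
Base.:+(p::RFun{T,C}, q::RFun{T,C}) where {T,C} =
    RFun{T,C}(p.coeffs + q.coeffs, p.var)
```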
I believe:
it provides safe operations without the need to explicitly check at runtime whether the corresponding rational functions (in this case, polynomials) are in the same variable,
it allows for keeping track of conjugated variables,
it allows for efficient function calls (e.g., (r::RationalFunction{T,Conj{true}}){T}(x::Real) = _funceval(r, x)) via the dispatch mechanism,
it pollutes the source code a bit more, and maybe makes it harder to read, and,
it might get slower if one needs several different variables and conjugations.
However, I doubt that a user will have many variables. In such a case, one could also provide some precompilation procedure for the usual suspects (such as :x, :t, :s, :q, etc.).
Anyway, in the current implementation of Polynomials, p.var is hardly ever used except for checking for consistent operations and for display purposes. If the above functionality (i.e., tracking of conjugated variables) is to be implemented without breaking the coefficients' structure (in this case, Vector, or T<:AbstractVector if sparse coefficients are going to be supported), the other viable option is to have a Bool in the type structure. That, however, brings another if switch into the picture.
I just wanted to share my opinions on the issue, as I am interested in having RationalFunctions with real coefficients and complex inputs in my applications. Right now I seem to have solved this properly in the above-mentioned package. However, if anyone is interested in some brainstorming and/or in unifying these representations so that both packages play well with each other, I am open to discussion.
Apart from this, I also have a couple of concerns regarding the convenience definition of ctranspose as polyder, and the no-op transpose deprecation warning in Julia 0.5.
Some things worth noting/discussing from the above excerpt:
defining ctranspose as polyder for convenience propagates into matrix operations,
transpose(p) = copy(p) would be needed to suppress the no-op transpose warning,
here, transpose(p) = p is a faster (more efficient) solution, I know. However, to me, one should always return temporaries from mathematical operations, as the result can otherwise be changed easily (and unintentionally) in cases such as p = Poly([1,1]); q = p.'; p[2] = 5 # q is unintentionally changed,
right now, Polynomials has this problem when taking the derivative (of zeroth order), too. But this is another discussion, and maybe a matter of the library writer's taste.
dot between Poly objects needs to be defined,
b'*A*b above should be reconsidered (b' will have conjugated variables, whereas A*b is unconjugated).
I hope what I have listed above makes sense to you, and I hope I have not consumed too much of your time.
I can't explain the poor accuracy in the following computation.
using Polynomials
r = [1.0e-6;1.0e6];
p = poly(r);
( sort(roots(p)) - r ) ./ r
The relative error for the smaller root is about 7.6e-6. Both MATLAB and the PolynomialRoots package seem to produce perfect answers. The roots are well conditioned.
My version info:
Julia Version 0.4.6
Commit 2e358ce (2016-06-19 17:16 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin13.4.0)
CPU: Intel(R) Core(TM) i5-2467M CPU @ 1.60GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.3
Package Directory: /Users/Toby/.julia/v0.4
6 required packages:
- Atom 0.4.5
- IJulia 1.3.2
- Plots 0.8.2
- PolynomialRoots 0.0.4
- Polynomials 0.1.0
- PyPlot 2.2.4
PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.
On Julia 0.2
On 2014-06-19 the testing status was Tests pass.
On 2014-06-21 the testing status changed to Tests fail, but package loads.
Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.
Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.
This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.
Test log:
INFO: Cloning cache of Polynomials from git://github.com/Keno/Polynomials.jl.git
INFO: Installing Polynomials v0.0.2
INFO: REQUIRE updated.
ERROR: wrong number of arguments
in include at boot.jl:238
at /home/idunning/pkgtest/.julia/v0.2/Polynomials/test/tests.jl:70
INFO: REQUIRE updated.
It seems this package would be a good fit for the JuliaMath organization, and @jverzani, the top contributor here, is also an active contributor in JuliaMath. Keno admittedly doesn't watch this package all that closely these days, and has expressed willingness to make the move should John deem it a good idea. In that case I can give Keno the necessary JuliaMath permissions to make the move.
Hello,
during some work I found the results of polyfit unnecessarily inaccurate.
I have found that the least-squares system matrix is constructed via x.^n (which presumably goes through the log/exp pow path), and the A\y inversion also did not live up to expectations, compared to other numerical tools.
So I wrote a better version of polyfit; see the attached patch. I also attach one test data file; try x = readdlm("polyfit_8th_order_test.dat"); p = polyfit(x[:,1], x[:,2], 8); plot(x[:,2] - p(x[:,1]))
-- a huge difference between the current (meaningless) result and the patched version. (And yes, the 8th order is physical in this data set ;-))
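For reference, the usual remedy (a sketch of the technique, not the attached patch) is to build the Vandermonde matrix by repeated multiplication rather than x.^p, avoiding the pow-by-log/exp path, and to let the backslash operator solve the system via QR:

```julia
function polyfit_sketch(x::AbstractVector, y::AbstractVector, n::Integer)
    A = ones(length(x), n + 1)
    for j in 2:n+1
        A[:, j] = A[:, j-1] .* x   # column j holds x.^(j-1), built incrementally
    end
    A \ y                           # least squares via QR factorization
end
```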
julia> Pkg.test("Polynomials")
INFO: Testing Polynomials
Test for the exponential function.
Test for the sine function.
Test for the cosine function.
Test for the summation of a factorially divergent series.
The approximate sum of the divergent series is: 0.596347366095526
The approximate sum of the convergent series is: 0.5963473623231946
Test for setindex!()
Test for element-wise operations
Test Failed
Expression: isa(psum,Vector{Poly{Float64}})
ERROR: LoadError: There was an error during testing
in record(::Base.Test.FallbackTestSet, ::Base.Test.Fail) at ./test.jl:397
in do_test(::Base.Test.Returned, ::Expr) at ./test.jl:281
in include_from_node1(::String) at ./loading.jl:426
in process_options(::Base.JLOptions) at ./client.jl:262
in _start() at ./client.jl:318
while loading /Users/ranjan/.julia/v0.5/Polynomials/test/runtests.jl, in expression starting on line 176
===========================================================[ ERROR: Polynomials ]============================================================
failed process: Process(`/Users/ranjan/julia/usr/bin/julia -Cnative -J/Users/ranjan/julia/usr/lib/julia/sys.dylib --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=yes --compilecache=yes /Users/ranjan/.julia/v0.5/Polynomials/test/runtests.jl`, ProcessExited(1)) [1]
=============================================================================================================================================
ERROR: Polynomials had test errors
in #test#61(::Bool, ::Function, ::Array{AbstractString,1}) at ./pkg/entry.jl:740
in (::Base.Pkg.Entry.#kw##test)(::Array{Any,1}, ::Base.Pkg.Entry.#test, ::Array{AbstractString,1}) at ./<missing>:0
in (::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#test,Tuple{Array{AbstractString,1}}})() at ./pkg/dir.jl:31
in cd(::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#test,Tuple{Array{AbstractString,1}}}, ::String) at ./file.jl:59
in #cd#1(::Array{Any,1}, ::Function, ::Function, ::Array{AbstractString,1}, ::Vararg{Array{AbstractString,1},N}) at ./pkg/dir.jl:31
in (::Base.Pkg.Dir.#kw##cd)(::Array{Any,1}, ::Base.Pkg.Dir.#cd, ::Function, ::Array{AbstractString,1}, ::Vararg{Array{AbstractString,1},N}) at ./<missing>:0
in #test#3(::Bool, ::Function, ::String, ::Vararg{String,N}) at ./pkg/pkg.jl:258
in test(::String, ::Vararg{String,N}) at ./pkg/pkg.jl:258
I'd like to see a polyfit function somewhere in the julia ecosystem, is this the right place?
Something like
julia> function polyfit(x, y, n)
A = [ float(x[i])^p for i = 1:length(x), p = 0:n ]
Poly(A \ y)
end
polyfit (generic function with 1 method)
julia> x = [0:10];
julia> y = x.^2 - 15*x + 50;
julia> p = polyfit(x, y, 2)
Poly(49.99999999999998 - 14.999999999999991x + 0.9999999999999991x^2)
The roots function is not type stable, since it can return either Array{Complex{Float64},1} or Array{Float64,1}, depending only on the polynomial coefficients.
It would seem like a good idea to always return Array{Complex{Float64},1}.
Looking at the current show, I think it does an amazing job.
Do you think we could extend show to have
support for printing the polynomial in the reverse order, and/or,
printing the polynomial in its factored form?
Motivation
We are working on a control systems toolbox, and we are requiring Polynomials in that toolbox. Generally, people in the control community prefer seeing the polynomials
in descending exponents, if the system model is represented by transfer functions, or,
in factored form such as (s+r1)(s+r2) for real roots (with corresponding multiplicities), or, (s^2+as+b) for complex conjugate roots, if the system is represented by zero-pole-gain models.
As a side note, in transfer function models, we are using a subset of the polynomials: polynomials with real coefficients. As a result, we can also have an efficient search strategy for Poly{T<:Real}s when finding the complex conjugate pairs.
Some Suggestions
We can have this behaviour (if that would be implemented in Polynomials), by
either proper method overloads, or,
maybe through a global setting in the package (for example, a const Dict{Symbol,Any} with a setting symbol pointing to a value type such as Show{:Factored}, Show{:Descending} and/or Show{:Ascending}) to dispatch the corresponding print method for show.
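For the descending-order case, a sketch of what such a print method might look like for real coefficients (printpoly_desc is a hypothetical helper, not existing API):

```julia
# Print an ascending coefficient vector with descending exponents.
function printpoly_desc(io::IO, coeffs::AbstractVector{<:Real}, var::Symbol)
    started = false
    for j in length(coeffs):-1:1
        c = coeffs[j]
        c == 0 && continue
        started ? print(io, c < 0 ? " - " : " + ") : c < 0 && print(io, "-")
        print(io, abs(c))
        j > 2 ? print(io, var, "^", j - 1) : j == 2 && print(io, var)
        started = true
    end
    started || print(io, 0)
end

# printpoly_desc(stdout, [50.0, -15.0, 1.0], :x) prints 1.0x^2 - 15.0x + 50.0
```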
There seems to be a problem with raising z to the zeroth power.
julia> let z = Poly([0.0, 1]),
           z1 = z^3 + z^0;
           length(z1), z1[end]
       end
(5,0.0)
So z1 has length 5, which does not seem correct. I would have expected a length-4 underlying array, corresponding to a degree-3 polynomial. Here I am trying to access the coefficient of greatest degree of the polynomial. Is there a degree function? A related problem is that 0*z^5 has length 6, so length does not give the degree.
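A sketch of what a degree function could look like, ignoring trailing zero coefficients (degree_sketch is a hypothetical name; coefficients are assumed in ascending order, as Polynomials stores them):

```julia
# Degree ignoring trailing zero coefficients; the degree of the zero
# polynomial is taken to be -1 by convention.
function degree_sketch(coeffs::AbstractVector)
    i = findlast(!iszero, coeffs)
    i === nothing ? -1 : i - 1
end

degree_sketch([1.0, 0.0, 0.0, 1.0])   # 3, for 1 + x^3
degree_sketch(zeros(6))               # -1, for 0*z^5
```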
The question is whether it should be a separate package or would it make sense to have the code here? I don't think it warrants a separate package and having to hunt through different packages to find the necessary function is a headache. It would be nice if you could just do something like
using Polynomials
p = hermite(3)
If it seems reasonable to have them here, then I would love to make some PRs at some point.
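For instance, the (physicists') Hermite polynomials satisfy the recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x), so a sketch of such a constructor (hermite is a hypothetical name, not existing API) is short:

```julia
using Polynomials

function hermite(n::Integer)
    h0 = Poly([1.0])             # H_0 = 1
    n == 0 && return h0
    h1 = Poly([0.0, 2.0])        # H_1 = 2x
    for k in 1:n-1               # H_{k+1} = 2x*H_k - 2k*H_{k-1}
        h0, h1 = h1, Poly([0.0, 2.0]) * h1 - 2k * h0
    end
    h1
end

hermite(3)                        # -12x + 8x^3
```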
I've just started using Polynomials. Readme.md says that Poly(::Vector) constructs a polynomial lowest order first. Experimenting, I see highest order first. Am I missing something? Happy to edit the Readme for you in a PR if needed.
Sometimes multiplying a polynomial p by another one, p_, may not yield the expected result. Here is a printout of a loop in which a polynomial Pj is updated in every iteration; the new Pj does not always reflect the update:
2 -4.047588495693385e-21
2 -154.8971429 + x
2 6.269598936178143e-19
3 6.269598936178143e-19
3 -211.0 + x
3 -1.3228853755335882e-16
4 -1.3228853755335882e-16
4 -230.1282051 + x
4 3.044332370245841e-14
5 3.044332370245841e-14
5 -242.552795 + x
5 -7.384113253121036e-12 + 3.044332370245841e-14x
6 -7.384113253121036e-12 + 3.044332370245841e-14x
6 -257.96875 + x
6 1.904870465766067e-9 - 1.5237539414489605e-11x + 3.044332370245841e-14x^2
Here is the source code for this (c_ is some scalar value, x is a vector of data):
Pj = Poly([c_])
for i in not_j
    println(i, " ", Pj)
    Pj *= Poly([-x[i], 1])
    println(i, " ", Poly([-x[i], 1]))
    println(i, " ", Pj)
end
dot{T<:Number,S}(p::Poly{S}, c::T) = p * c
dot{T<:Number,S}(c::T, p::Poly{S}) = c * p
dot(p1::Poly, p2::Poly) = p1 * p2
which make no sense to me.
dot should define an inner product, which means that it should return a number (or whatever the coefficient type of the polynomial is). For polynomials, you would normally define the dot product to be some kind of integral. But since there are many such definitions and no single canonical choice, it would be better to leave this undefined.
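For the record, one possible (but by no means canonical) choice would be the L2 inner product on [-1, 1], which for real coefficients can be evaluated exactly via polyint; a sketch:

```julia
using Polynomials

# ⟨p, q⟩ = ∫_{-1}^{1} p(x) q(x) dx, computed from the antiderivative of p*q.
function dot_l2(p::Poly, q::Poly)
    P = polyint(p * q)
    P(1) - P(-1)
end

dot_l2(Poly([0.0, 1.0]), Poly([0.0, 1.0]))   # ∫ x^2 dx over [-1,1] = 2/3
```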
Seriously, @aytekinar and I were thinking of trying to implement support for sparse coefficients. In our forks we will make a branch and incrementally try to implement this.
Would it be OK if we removed the Base./(p1::Poly, p2::Poly) overload from the package?
Over the weekend I have tried implementing RationalFunctions.jl in the hope for extending Polynomials to support rational functions.
In that package, I am redefining the division operator between Polys. However, Julia issues a warning when doing so, which is not fun. The operator has, in any case, been deprecated in Polynomials for a while now.
julia> using Polynomials
julia> A = Matrix{Poly{Number}}(2,1);
julia> eltype(eltype(A))
Any
julia> B = Matrix{Poly{Float64}}(2,1);
julia> eltype(eltype(B))
Any
Desired behaviour
julia> using Polynomials
julia> A = Matrix{Poly{Number}}(2,1);
julia> eltype(eltype(A))
Number
julia> B = Matrix{Poly{Float64}}(2,1);
julia> eltype(eltype(B))
Float64
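The likely fix (a sketch, assuming this method is indeed missing) is to define eltype on the Poly type itself, not just on instances:

```julia
# Julia 0.5-era syntax; in later Julia this would be
# Base.eltype(::Type{Poly{T}}) where {T} = T
Base.eltype{T}(::Type{Poly{T}}) = T
```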
# determine the last nonzero element and truncate a accordingly
a_last = max(1, findlast(p -> (abs(p) > 2*eps(T)), a))
new(a[1:a_last], symbol(var))
That has a surprising consequence:
higher-order coefficients are truncated if their absolute value is below 2*eps(T). This effectively underflows small coefficients: it is easy to find p = Poly(..) with p / 1000 * 1000 != p. There should be no such limitation, which restricts usability, for example when working with interpolating polynomials of higher degree.
I recommend replacing abs(p) > 2*eps(T) with p != zero(T).
The polynomial gets truncated to p = truncate(p) = -2.510840694154672e57∙x, so roots(p) returns just one root, 0.0.
This is wrong and meaningless. When the line p = truncate(p) is removed from the roots() function, roots(p) returns the correct roots:
1.62259e31
1.75922e13
8.79609e12
0.0
So, forcing truncate() is counterproductive. Just leave the polynomial as is and let the eigvals() routine do its job.
I have come across such expectations among users of the control toolbox in MATLAB. In our attempt to port such functionality to Julia, we are relying on Polynomials. Hence, I thought I would open an issue here and discuss it with you.
Solution Proposal
Should you be interested in supporting such a feature, please check the PR.
P.S.: I have also got rid of the clutter in the constructor. I hope you will like it.