juliamath/FixedPointNumbers.jl: fixed point types for Julia (License: Other)
The definition of `realmin` seems strange for fixed point numbers. The observed behavior is:
julia> realmin(Q11f4)
-2048.0Q11f4
But the expected behavior is:
julia> realmin(Q11f4)
0.06Q11f4
as per the docstring:
help?> realmin
search: realmin realmax readdlm ReadOnlyMemoryError
realmin(T)
The smallest in absolute value non-subnormal value representable by the given floating-point
DataType T.
(Maybe this function is redundant with `eps` and need not be defined?)
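For reference, both values can be derived from the bit layout of Q11f4 (an Int16 with 11 integer and 4 fractional bits). A quick sketch of the arithmetic, in Python for illustration:

```python
# Q11f4: raw Int16 scaled by 2^-4 (11 integer bits, 4 fractional bits).
f = 4

# Smallest positive value (what the docstring suggests): raw integer 1.
smallest = 1 / (1 << f)
print(smallest)        # 0.0625, which displays as 0.06Q11f4

# What realmin currently returns is the most negative value (typemin):
most_negative = -(1 << 15) / (1 << f)
print(most_negative)   # -2048.0
```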
To check the modification for issue #129, I ran the tests on a 32-bit ARMv7 system (RPi 2 Model B v1.2). I then hit a problem with `rem` (`%`).
modulus: Test Failed at ~/.julia/dev/FixedPointNumbers/test/normed.jl:148
Expression: (-0.3 % N0f8).i == round(Int, -0.3 * 255) % UInt8
Evaluated: 0x00 == 0xb4
modulus: Test Failed at ~/.julia/dev/FixedPointNumbers/test/normed.jl:154
Expression: (-0.3 % N6f10).i == round(Int, -0.3 * 1023) % UInt16
Evaluated: 0x0000 == 0xfecd
The cause is the behavior of `unsafe_trunc`.
FixedPointNumbers.jl/src/normed.jl
Line 103 in 70ae1d6
FixedPointNumbers.jl/src/normed.jl
Lines 204 to 205 in 70ae1d6
julia> versioninfo()
Julia Version 1.0.3
Platform Info:
OS: Linux (arm-linux-gnueabihf)
CPU: ARMv7 Processor rev 4 (v7l)
WORD_SIZE: 32
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, cortex-a53)
julia> unsafe_trunc(UInt8, -76.0) # or the intrinsic `fptoui`
0x00
julia> unsafe_trunc(Int8, -76.0)
-76
julia> unsafe_trunc(UInt8, unsafe_trunc(Int8, -76.0))
0xb4
(The problem occurs not only on v1.0.3 but also on v1.0.5 and v1.2.0. I have not tried 64-bit systems.)
Although the behavior of `unsafe_trunc` may not be what we want, this is not a bug. The documentation says: "If the value is not representable by `T`, an arbitrary value will be returned."
https://docs.julialang.org/en/v1/base/math/#Base.unsafe_trunc
However, I don't think it is good to make `rem` users aware of the internal `unsafe_trunc`.
The workaround is to convert the value to a `Signed` type temporarily, as shown above.
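A sketch of the integer arithmetic behind the failing test and the workaround (Python for illustration; the two's-complement wraparound is the point):

```python
# The test expects round(Int, -0.3 * 255) reinterpreted as a UInt8,
# i.e. two's-complement wraparound modulo 256.
x = round(-0.3 * 255)      # -76 (round half to even: -76.5 -> -76)
expected = x % 256         # 180 == 0xb4
print(hex(expected))       # 0xb4

# On 32-bit ARM, fptoui on a negative float gave 0x00 instead; truncating
# to the signed type first and then wrapping reproduces `expected`.
```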
BTW, the behavior of `Normed`'s `rem`, which the tests above specify, seems unintuitive. (Since I know the internals of `Normed`, I think the behavior is reasonable, though.) So another option is to eliminate the tests for negative float inputs, i.e. make it undefined behavior.
While I was reviewing the PR #123, I found a minor bug.
julia> float(3N3f5) # this is ok
3.0f0
julia> float(3N4f12)
3.0000002f0
julia> float(3N8f8)
3.0000002f0
julia> float(3N10f6)
3.0000002f0
julia> float(3N12f4)
3.0000002f0
julia> float(3N13f3)
3.0000002f0
julia> float(3N2f6)
3.0000002f0
julia> float(3N4f4)
3.0000002f0
julia> float(3N5f3)
3.0000002f0
I think this is a problem with the division optimization.
FixedPointNumbers.jl/src/normed.jl
Lines 75 to 78 in da39318
Rounding errors can occur anywhere, but errors at integer values are especially critical.
julia> isinteger(float(3N5f3))
false
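The mechanism can be reproduced outside Julia. Assuming the optimization replaces the division `i / (2^f - 1)` with a multiplication by a precomputed `Float32` reciprocal, the error at integer values falls out directly (Python sketch, emulating Float32 rounding via `struct`):

```python
import struct

def f32(x):
    # round a Python float to the nearest IEEE-754 binary32 value
    return struct.unpack('f', struct.pack('f', x))[0]

# N5f3 scales by 2^3 - 1 == 7, so 3.0 is stored as the raw integer 21.
raw = 21

exact = f32(raw / 7)             # division: exactly 3.0
recip = f32(1 / 7)               # precomputed reciprocal, already rounded
optimized = f32(raw * recip)     # multiplication: 3.0000002

print(exact, optimized)
```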
The "fixed" testset depends on `testapprox`, which is defined in "test/normed.jl" (formerly named ufixed.jl).
FixedPointNumbers.jl/test/fixed.jl
Lines 86 to 90 in 70ae1d6
FixedPointNumbers.jl/test/normed.jl
Lines 233 to 240 in 70ae1d6
This thwarts selective execution of the tests and causes an error (`UndefVarError: testapprox not defined`) in certain environments (e.g. Julia v1.0.3 on 32-bit ARMv7).
`testapprox` is not so complicated, so it may be a good idea to inline it or define it locally.
This may be off topic, but I also think `testapprox` (and its friend `testtrunc`) generate too many test cases. `@test` should be outside the loop.
Related to my testing in #103, but this seems more straightforward, so I figured it might be cleaner to separate it into its own issue.
julia> typemax(Fixed{Int8, 7})
0.992Q0f7
julia> typemax(Fixed{Int16, 15})
0.99997Q0f15
julia> typemax(Fixed{Int32, 31})
-0.9999999995Q0f31
julia> typemax(Fixed{Int64, 63})
InfQ0f63
julia> Fixed{Int32, 31}(0.2)
-1.7999999998Q0f31
julia> versioninfo()
Julia Version 0.6.2
Commit d386e40c17 (2017-12-13 18:08 UTC)
Platform Info:
OS: Linux (i686-pc-linux-gnu)
CPU: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
WORD_SIZE: 32
BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Prescott)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, broadwell)
I am seeing this in JuliaBox with julia-0.4-rc2. Doing a `using FixedPointNumbers` throws:
INFO: Recompiling stale cache file /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: Module Compat uuid did not match cache file
WARNING: deserialization checks failed while attempting to load cache from /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji
INFO: Precompiling module FixedPointNumbers...
INFO: Recompiling stale cache file /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: Module Compat uuid did not match cache file
LoadError: __precompile__(true) but require failed to create a precompiled cache file
while loading In[9], in expression starting on line 2
in require at ./loading.jl:252
in stale_cachefile at loading.jl:439
in recompile_stale at loading.jl:457
in _require_from_serialized at loading.jl:83
in _require_from_serialized at ./loading.jl:109
in require at ./loading.jl:219
in stale_cachefile at loading.jl:439
in recompile_stale at loading.jl:457
in _require_from_serialized at loading.jl:83
in _require_from_serialized at ./loading.jl:109
in require at ./loading.jl:219
Doing a `using Compat` succeeds. `Pkg.status` is as follows:
1 required packages:
- ProfileView 0.1.1
14 additional packages:
- BinDeps 0.3.17
- Cairo 0.2.31
- ColorTypes 0.1.6
- Colors 0.5.4
- Compat 0.7.3
- Docile 0.5.19
- FixedPointNumbers 0.0.11
- Graphics 0.1.3
- Gtk 0.9.2
- GtkUtilities 0.0.5
- HttpCommon 0.2.4
- Reexport 0.0.3
- SHA 0.1.2
- URIParser 0.1.0
This might belong to Julia, since it seems to be related to parsing.
julia> 0x80uf8^2
ERROR: `*` has no method matching *(::UfixedConstructor{Uint8,8}, ::UfixedConstructor{Uint8,8})
in power_by_squaring at intfuncs.jl:56
in ^ at intfuncs.jl:86
julia> (0x80uf8)^2
0.25196466f0
For comparison:
julia> 1.23f0^2
1.5129f0
x=rand(Float32,100,100)
julia> convert(Ufixed16,x)
ERROR: `convert` has no method matching convert(::Type{UfixedBase{Uint16,16}}, ::Array{Float32,2})
in convert at base.jl:13
However:
julia> convert(Ufixed16,x[1,1])
Ufixed16(0.31148)
I am new to Julia, so maybe I'm just doing something wrong?
For use in Color/Images, I'm considering implementing a set of types that I was thinking of calling `Fixed8`, `Fixed10`, `Fixed12`, `Fixed14`, and `Fixed16`. The idea behind these is to allow `Uint8` or `Uint16` values to be used to represent image pixels, but have `Fixed8(0xff) == 1.0` evaluate as `true`. (The 10, 12, 14, and 16 versions would be used for images collected with 10-, 12-, 14-, and 16-bit cameras, but all would use an underlying `Uint16` data type.) That would make sure that the numeric/bitwise representation was decoupled from the meaning of "white" in an RGB color sense. See discussion in JuliaAttic/Color.jl#42.
I'm wondering if this functionality should go into this package, or whether it's incompatible with your goals and hence should be a different package.
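The core of the proposal is just a rescaling of the raw integer. A minimal sketch of the hypothetical `Fixed8` semantics (Python for illustration):

```python
# Hypothetical Fixed8: a raw UInt8 scaled by 1/255, so 0xff is exactly 1.0
# ("white"), decoupling the bit pattern from the color meaning.
def fixed8(raw):
    return raw / 0xff

print(fixed8(0xff))   # 1.0
print(fixed8(0x80))   # ~0.502
```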
As discussed in https://github.com/JuliaLang/julia/issues/15690 with @timholy I've been recently bitten by the inconsistency regarding overflow in FixedPoint arithmetic when dealing with Images.jl.
FixedPointNumbers pretend to be `<:Real` numbers, but their arithmetic behaves more like `Integer`. I just cannot think of any use case where one would gain an advantage from "modulo" arithmetic when dealing with numbers guaranteed to fall within the [0, 1] range. I'd be glad if my code stopped with an OverflowError exception indicating a problem in my algorithm, which would then be easily fixable by manually widening the operands. With the current silent overflow, the algorithm just continues, finally giving wrong results.
Before writing a PR to introduce arithmetic that throws on overflow or underflow, I'd like to know your opinion on this change. I was also thinking of using a global "flag" to dynamically opt in to the overflow-checking behavior, but I'm worried about the branching costs. Is it possible to use e.g. traits for this behavior change?
As I suggested here, the current `Normed` -> `Normed` conversions are inefficient in some cases.
FixedPointNumbers.jl/src/normed.jl
Lines 41 to 46 in 8d17739
The current conversion method has two problems:
The former means that the method is not SIMD-suitable.
Regarding the latter, the conversion between types with the same `f` is already specialized.
FixedPointNumbers.jl/src/normed.jl
Line 13 in ee5bd54
`N0f8` -> `N0f16` specialization (I wonder why):
FixedPointNumbers.jl/src/normed.jl
Line 47 in ee5bd54
I do not think these are urgent problems. However, the optimization may be useful in the future to speed up accumulation (`reduce`). And since I just found an (ugly) workaround for the constant division problem, I am writing this issue as a memorandum/reminder.
The figures below visualize the cases where the optimization is available.
The `f1 == f2` lines are already supported. You can get the result for the other cases with the following script:
using Gadfly, Colors
set_default_plot_size(15cm, 8cm)
function mat(dest, src)
b1, b2 = 8*sizeof(dest), 8*sizeof(src)
if b1 > b2 # widening
safe = [b1-f1 > b2-f2 || b2 == f2 ? 1 : -1 for f2=1:b2, f1=1:b1]
else
safe = [b1-f1 >= b2-f2 ? 1 : -1 for f2=1:b2, f1=1:b1]
end
safe .* [isinteger(f1/f2) ? f2/f1 : 1/36 for f2=1:b2, f1=1:b1]
end;
function plot_mat(dest, src)
s = Scale.color_continuous(
colormap=Scale.lab_gradient(LCHab(0, 100, 20), "white", LCHab(30, 100, 200)),
minvalue=-1, maxvalue=1)
m = mat(dest, src)
spy(m,
Guide.title("Normed{$src,f2} -> Normed{$dest,f1}"),
Guide.xlabel("f1"), Guide.xticks(ticks=axes(m, 2)),
Guide.ylabel("f2"), Guide.yticks(ticks=axes(m, 1)),
Guide.colorkey(title="scale"), s,
Theme(plot_padding=[0mm,2mm,5mm,0mm]))
end;
plot_mat(UInt16, UInt8)
plot_mat(UInt32, UInt8)
plot_mat(UInt32, UInt16)
Edit: The safe areas were wrong; I forgot to take the "carry" (overlapping) into account. The figures and the script above have been updated.
The code currently assumes that the denominator is `2^f`, which is not the case for `Normed`, e.g.:
julia> isinteger(Normed{UInt8,7}(1))
false
I believe you need to change
isinteger(x::FixedPoint{T,f}) where {T,f} = (x.i&(1<<f-1)) == 0
to
isinteger(x::Fixed{T,f}) where {T,f} = (x.i&(1<<f-1)) == 0
isinteger(x::Normed{T,f}) where {T,f} = (x.i%(1<<f-1)) == 0
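The difference between the two proposed methods comes down to the scale factor: `Fixed` divides by `2^f`, while `Normed` divides by `2^f - 1`. A Python sketch for `Normed{UInt8,7}(1)`:

```python
# Fixed{T,f} represents i / 2^f; Normed{T,f} represents i / (2^f - 1).
f = 7
i = (1 << f) - 1          # 127, the raw value of Normed{UInt8,7}(1)

# Fixed-style test (wrong for Normed): are the low f bits zero?
fixed_style = (i & ((1 << f) - 1)) == 0    # False, 127 has all low bits set

# Normed-style test: is i divisible by 2^f - 1?
normed_style = (i % ((1 << f) - 1)) == 0   # True, 127/127 == 1

print(fixed_style, normed_style)
```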
_
_ _ _(_)_ | A fresh approach to technical computing
(_) | (_) (_) | Documentation: https://docs.julialang.org
_ _ _| |_ __ _ | Type "?help" for help.
| | | | | | |/ _` | |
| | |_| | | | (_| | | Version 0.7.0-DEV.3266 (2018-01-04 16:44 UTC)
_/ |\__'_|_|_|\__'_| | Commit 93454e2* (0 days old master)
|__/ | x86_64-linux-gnu
julia> Pkg.test("FixedPointNumbers.jl")
[ Info: Testing FixedPointNumbers @ Base.Pkg.Entry entry.jl:723
ERROR: LoadError: UndefVarError: promote_sys_size not defined
Stacktrace:
[1] getproperty(::Module, ::Symbol) at ./sysimg.jl:14
[2] top-level scope at /home/lobi/.julia/v0.7/FixedPointNumbers/src/FixedPointNumbers.jl:140
[3] include at ./boot.jl:295 [inlined]
[4] include_relative(::Module, ::String) at ./loading.jl:521
[5] include(::Module, ::String) at ./sysimg.jl:26
[6] top-level scope
[7] eval at ./boot.jl:298 [inlined]
[8] top-level scope at ./<missing>:2
in expression starting at /home/lobi/.julia/v0.7/FixedPointNumbers/src/FixedPointNumbers.jl:140
ERROR: LoadError: Failed to precompile FixedPointNumbers to /home/lobi/.julia/lib/v0.7/FixedPointNumbers.ji.
Stacktrace:
[1] error at ./error.jl:33 [inlined]
[2] compilecache(::String) at ./loading.jl:648
[3] compilecache at ./loading.jl:605 [inlined]
[4] _require(::Symbol) at ./loading.jl:460
[5] require(::Symbol) at ./loading.jl:333
[6] include at ./boot.jl:295 [inlined]
[7] include_relative(::Module, ::String) at ./loading.jl:521
[8] include(::Module, ::String) at ./sysimg.jl:26
[9] process_options(::Base.JLOptions) at ./client.jl:323
[10] _start() at ./client.jl:374
in expression starting at /home/lobi/.julia/v0.7/FixedPointNumbers/test/runtests.jl:1
┌ Error: ------------------------------------------------------------
│ # Testing failed for FixedPointNumbers
│ exception = ErrorException("failed process: Process(`/home/lobi/julia07/usr/bin/julia -Cnative -J/home/lobi/julia07/usr/lib/julia/sys.so --compile=yes --depwarn=yes --code-coverage=none --color=yes --compiled-modules=yes --check-bounds=yes --warn-overwrite=yes --startup-file=yes /home/lobi/.julia/v0.7/FixedPointNumbers/test/runtests.jl`, ProcessExited(1)) [1]")
└ @ Base.Pkg.Entry entry.jl:739
ERROR: FixedPointNumbers had test errors
The latest tagged version (i.e. v0.3.0) generates a huge number of deprecation warnings. I think you have already noticed and that a new version will be released sooner or later, but just in case.
Currently, if you have a `UInt16` that you want to reinterpret as a signed fixed-point number, the `reinterpret` method here doesn't apply, so you get an error:
julia> reinterpret(Fixed{Int16, 15}, 0x8123)
ERROR: bitcast: target type not a leaf primitive type
Stacktrace:
[1] reinterpret(::Type{FixedPointNumbers.Fixed{Int16,15}}, ::UInt16) at ./essentials.jl:155
[2] eval(::Module, ::Any) at ./boot.jl:235
Is it worth having another `reinterpret` method to handle this, or is it safer to have the user double-reinterpret (e.g. `reinterpret(Fixed{Int16, 15}, reinterpret(Int16, 0x8123))`)?
One proposed method would be:
function Base.reinterpret(::Type{Fixed{T,f}}, x::Unsigned) where {T <: Signed,f}
reinterpret(Fixed{T, f}, reinterpret(T, x))
end
Or we could even drop the requirement that `x` subtypes `Unsigned` if we want to reinterpret more broadly.
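For reference, the bit-level effect of the double reinterpret can be sketched as follows (Python, using `struct` to emulate the bitcast):

```python
import struct

# Reinterpret the UInt16 bit pattern 0x8123 as a signed Int16, then read
# it as a Q0f15 value (raw / 2^15).
bits = 0x8123
(signed,) = struct.unpack('<h', struct.pack('<H', bits))
value = signed / (1 << 15)
print(signed, value)   # -32477 -0.991119384765625
```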
#55 was closed in September 2016; however, I see the same with
_
_ _ _(_)_ | A fresh approach to technical computing
(_) | (_) (_) | Documentation: http://docs.julialang.org
_ _ _| |_ __ _ | Type "?help" for help.
| | | | | | |/ _` | |
| | |_| | | | (_| | | Version 0.4.5 (2016-03-18 00:58 UTC)
_/ |\__'_|_|_|\__'_| |
|__/ | x86_64-linux-gnu
julia> Pkg.init()
INFO: Initializing package repository /home/lobi/.julia/v0.4
INFO: Cloning METADATA from git://github.com/JuliaLang/METADATA.jl
julia> Pkg.add("Cairo")
INFO: Installing BinDeps v0.4.5
INFO: Installing Cairo v0.2.35
INFO: Installing ColorTypes v0.2.12
INFO: Installing Colors v0.6.9
INFO: Installing Compat v0.19.0
INFO: Installing FixedPointNumbers v0.2.1
INFO: Installing Graphics v0.1.3
INFO: Installing Reexport v0.0.3
INFO: Installing SHA v0.3.1
INFO: Installing URIParser v0.1.8
INFO: Building Cairo
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/BinDeps.ji for module BinDeps.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Compat.ji for module Compat.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/URIParser.ji for module URIParser.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/SHA.ji for module SHA.
INFO: Package database updated
julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Cairo.ji for module Cairo.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Colors.ji for module Colors.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: New definition
floattype(Type{#T<:FixedPointNumbers.Fixed}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/fixed.jl:16
is ambiguous with:
floattype(Type{FixedPointNumbers.FixedPoint{#T<:Union{Int8, UInt16, Int16, UInt8}, #f<:Any}}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/FixedPointNumbers.jl:89.
To fix, define
floattype(Type{FixedPointNumbers.Fixed{_<:Union{Int8, Int16}, #f<:Any}})
before the new definition.
WARNING: New definition
floattype(Type{#T<:FixedPointNumbers.UFixed}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:14
is ambiguous with:
floattype(Type{FixedPointNumbers.FixedPoint{#T<:Union{Int8, UInt16, Int16, UInt8}, #f<:Any}}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/FixedPointNumbers.jl:89.
To fix, define
floattype(Type{FixedPointNumbers.UFixed{_<:Union{UInt16, UInt8}, #f<:Any}})
before the new definition.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/ColorTypes.ji for module ColorTypes.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Reexport.ji for module Reexport.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Graphics.ji for module Graphics.
INFO: Cairo tests passed
julia> Pkg.status()
1 required packages:
- Cairo 0.2.35
9 additional packages:
- BinDeps 0.4.5
- ColorTypes 0.2.12
- Colors 0.6.9
- Compat 0.19.0
- FixedPointNumbers 0.2.1
- Graphics 0.1.3
- Reexport 0.0.3
- SHA 0.3.1
- URIParser 0.1.8
_ _ _(_)_ | A fresh approach to technical computing
(_) | (_) (_) | Documentation: http://docs.julialang.org
_ _ _| |_ __ _ | Type "help()" for help.
| | | | | | |/ _` | |
| | |_| | | | (_| | | Version 0.4.0-dev+5791 (2015-07-04 23:24 UTC)
_/ |\__'_|_|_|\__'_| | Commit 59a1e9c* (3 days old master)
|__/ | x86_64-linux-gnu
julia> using FixedPointNumbers
WARNING: deprecated syntax "< {" at /home/lobi/.julia/v0.4/FixedPointNumbers/src/fixed32.jl:20.
Use "<{" instead.
WARNING: deprecated syntax "< {" at /home/lobi/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:91.
Use "<{" instead.
This works in Julia 0.6.2:
Float32(1.0) |> N0f8
Float32(1.0) |> N0f16
but this does not:
Float32(1.0) |> N0f32
Float32(1.0) |> N0f64
Float64(1.0) |> N0f64
It looks like it fails when the number of bits is the same or larger. Error message:
ArgumentError: FixedPointNumbers.Normed{UInt64,64} is a 64-bit type representing 0 values from 0.0 to 1.0; cannot represent 1.0
`FixedPointNumbers.floattype` seems inconsistent with `Base.float`:
julia> float(Int8)
Float64
julia> floattype(Int8)
Float32
Should we update accordingly?
Any interest in moving this package to JuliaMath? Orgs have pluses and minuses, but on balance they seem to be recommended for key pieces of infrastructure (which this definitely is).
julia> 2 * 0.5Q1f14
ERROR: InexactError()
Stacktrace:
[1] convert at /home/fengyang/.julia/v0.6/FixedPointNumbers/src/fixed.jl:42 [inlined]
[2] promote at ./promotion.jl:174 [inlined]
[3] *(::Int64, ::FixedPointNumbers.Fixed{Int16,14}) at ./promotion.jl:247
This is happening because `2` is not representable in Q1f14, even though the result `1` would be. I think it is avoidable by specializing `::Integer * ::FixedPoint` (and `::FixedPoint * ::Integer`?) directly, but I don't have much experience with this. If this is the right thing to do, I'll make a PR.
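A sketch of why promotion fails here: the representable range of Q1f14 excludes the literal `2`, even though the product fits (Python for illustration):

```python
# Q1f14: raw Int16 scaled by 2^-14; range is [-2, 2 - 2^-14].
f = 14
lo = -(1 << 15) / (1 << f)        # -2.0
hi = ((1 << 15) - 1) / (1 << f)   # 1.99993896484375

# Promoting the literal 2 to Q1f14 must fail (2 > hi), even though
# the product 2 * 0.5 == 1.0 is well within range.
print(lo, hi)
assert not (lo <= 2 <= hi)
assert lo <= 2 * 0.5 <= hi
```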
`Fixed` and `Normed` have evolved independently, so some functions are specialized even though they do not need to be.
cf. ee5bd54...kimikage:commonize PR #151
I'm going to refactor the code in 3 steps. Although "test-first" is a good practice, step 3 requires major renovations. Please give me your advice if you have any ideas.
WARNING: New definition
convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:46
is ambiguous with:
convert(Type{FixedPointNumbers.UFixed{#T1<:Any, #f<:Any}}, FixedPointNumbers.UFixed{#T2<:Any, #f<:Any}) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:44.
To fix, define
convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed{T<:Unsigned, #f<:Any})
before the new definition.
WARNING: New definition
convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:46
is ambiguous with:
convert(Type{FixedPointNumbers.UFixed{#T1<:Any, #f<:Any}}, FixedPointNumbers.UFixed{#T2<:Any, #f<:Any}) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:44.
The tag name "v" is not of the appropriate SemVer form (vX.Y.Z).
cc: @timholy
In audio it's common to use signed integer samples, which in FixedPointNumbers parlance becomes `Fixed{Int16, 15}` for 16-bit samples, or `Fixed{Int8, 7}` for 8-bit.
For my use cases, aliases like `Fixed8` and `Fixed16` with the above definitions would be useful, but I see there's already a definition of `Fixed16` as `Fixed{Int32, 16}`, and a deprecated `Fixed32` that aliases to the current `Fixed16` definition.
Would these definitions also work for the use cases currently using the `Fixed16` definition, or do they need the extra storage space for values outside [-1, 1)?
These are mentioned in the README, but don't appear to be there?
julia> using FixedPointNumbers
julia> Normed8
UndefVarError: Normed8 not defined
julia> Normed{UInt8, 8}
FixedPointNumbers.Normed{UInt8,8}
julia> Ufixed8(0xff)
Ufixed8(0.004)
Isn't it supposed to be Ufixed8(1.0)?
Julia Version 0.4.0-dev+4791
Commit a9b0135 (2015-05-12 15:32 UTC)
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
WORD_SIZE: 64
BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: liblapack.so.3
LIBM: libopenlibm
LLVM: libLLVM-3.3
Running `julia runtests.jl` completed without any errors, though?
A recent PR in Julia changed the capitalization of the unsigned integer types, so they are written `UInt` instead of `Uint`. That naming convention might apply here also.
In JuliaGraphics/ColorTypes.jl#11, the need arises for fixed-point numbers with a bit size other than 32 bits.
Ufixed is very flexible and nice for representing ColorValues, but its design is a bit different: Ufixed is defined on [0, 1], while Fixed32 has a variable range depending on the number of fraction bits.
For my needs I wouldn't need a variable number of fraction bits, as all values are within [-1, 1].
I was surprised to find that converting an integer-based normed value to Float64
is not equivalent to dividing out the integer:
julia> Float64(reinterpret(Normed{UInt16, 16}, UInt16(100))) == 100/2^16
false
Is this by design? I assumed the Float64 conversion would just do the division internally.
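As far as I understand it, the scale of `Normed{UInt16,16}` is `2^16 - 1`, not `2^16`, so that `typemax(UInt16)` maps to exactly 1.0, which explains the result. Sketched in Python:

```python
# Normed{UInt16,16} represents raw / (2^16 - 1), so typemax -> exactly 1.0.
i = 100
normed_value = i / (2**16 - 1)   # what the Float64 conversion computes
plain = i / 2**16                # the "dividing out the integer" guess
print(normed_value == plain)     # False
print(0xffff / (2**16 - 1))      # 1.0
```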
The current codebase is not compatible with Julia versions before v0.6, because the syntax for parametric methods has been fully changed to the current (i.e. `where`) style. I think backporting is not easy. Therefore, the following code is no longer useful.
FixedPointNumbers.jl/src/FixedPointNumbers.jl
Lines 157 to 169 in 70ae1d6
Removing the obsolete promotions may help improve the promotion rules. For example, the current promotions use `const Treduce = Float64`, but I don't think that is always a good choice. (I expect the negative impact of the breaking change to be significant, though.)
Moreover, it ostensibly improves the code coverage.
I am working on improving the accuracy of the conversions from `Normed` to `Float` (#129), and I am interested in `scaledual`, which seems to be related to the conversions.
`scaledual` was introduced in d1087f7:
The introduction of `scaledual` is a bit speculative, but it can essentially double the speed of certain operations. It has the following property:
bd, ad = scaledual(b, a)
b*a == bd*ad
but the RHS might be faster (particularly for floating-point `b` and an array `a` of fixed-point numbers).
Originally posted by @timholy in #2 (comment)
However, in the current codebase, I think `scaledual` does not have the property above, as its own test specifies (`a[1] != af8[1]`):
FixedPointNumbers.jl/test/normed.jl
Lines 287 to 294 in da39318
I will do my best for #129, but a slowdown is inevitable. If `scaledual` is helpful as a workaround for people who prefer speed over accuracy, I feel relieved.
@timholy, did I not understand that correctly?
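As I read it, the claimed property is an algebraic rearrangement: fold the `Normed` scale into the float factor so the per-element division disappears. A Python sketch (exact equality may not hold for every `b` due to rounding, which may be what the test is checking):

```python
import math

# b * (i / scale) == (b / scale) * i, with the division hoisted out of
# the per-element work. scale = 2^8 - 1 for N0f8.
scale = 2**8 - 1
b = 0.5
raw = 128                      # raw integer of an N0f8 element

lhs = b * (raw / scale)        # b * a
rhs = (b / scale) * raw        # bd * ad
print(lhs, rhs)
assert math.isclose(lhs, rhs)
```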
Actually, this surprises me a bit. Any hints/ideas on how to fix this?
A similar result can be observed for `*`, `.*`, etc.
julia> x_n0f8 = rand(N0f8, 1000, 1000);
julia> x_float64 = rand(Float64, 1000, 1000);
julia> x_uint8 = rand(UInt8, 1000, 1000);
julia> @benchmark x_n0f8 .+ x_n0f8
BenchmarkTools.Trial:
memory estimate: 976.84 KiB
allocs estimate: 6
--------------
minimum time: 3.547 ms (0.00% GC)
median time: 3.732 ms (0.00% GC)
mean time: 3.838 ms (1.11% GC)
maximum time: 7.032 ms (0.00% GC)
--------------
samples: 1301
evals/sample: 1
julia> @benchmark x_float64 .+ x_float64
BenchmarkTools.Trial:
memory estimate: 7.63 MiB
allocs estimate: 4
--------------
minimum time: 1.214 ms (0.00% GC)
median time: 1.346 ms (0.00% GC)
mean time: 1.869 ms (29.19% GC)
maximum time: 5.726 ms (71.81% GC)
--------------
samples: 2667
evals/sample: 1
julia> @benchmark x_uint8 .+ x_uint8
BenchmarkTools.Trial:
memory estimate: 976.75 KiB
allocs estimate: 4
--------------
minimum time: 80.220 μs (0.00% GC)
median time: 87.781 μs (0.00% GC)
mean time: 235.548 μs (18.39% GC)
maximum time: 1.844 ms (69.85% GC)
--------------
samples: 10000
evals/sample: 1
I'm troubleshooting some 32-bit failures and seeing some very strange things. I dug down for a while and narrowed it down to floats not getting converted to Fixed-point numbers correctly, e.g. I added this to my test:
x = 0.222538
@show x
f = Fixed{Int32, 31}(x)
@show f
and got the following appveyor output on 32-bit:
x = 0.222538
f = -1.777462Q0f31
Then in trying to put together a repro to add to the FixedPointNumbers tests, I saw strange behavior on my local machine (64-bit, running Julia 0.6.2). These were entered as-written here, back-to-back, at the REPL (in Atom):
julia> typemax(Fixed{Int8, 7})
0.292Q0f7
julia> typemax(Fixed{Int8, 7})
0.992Q0f7
Same expression, different results.
I added some more tests to my code:
@testset "fractional fixed-point works" begin
for T in (Fixed{Int8, 7},
Fixed{Int16, 15},
Fixed{Int32, 31},
Fixed{Int64, 63})
tol = (typemax(T) + 1.0) / (sizeof(T) * 8)
for x in linspace(-1, float(typemax(T))-tol, 100)
@test abs(Fixed{Int16, 15}(x) - x) <= tol
end
end
end
which works on my local 64-bit machine but throws an error on the appveyor 32-bit machine:
fractional fixed-point works: Error During Test
Got an exception of type InexactError outside of a @test
InexactError()
Stacktrace:
[1] trunc at .\float.jl:651
[2] _linspace(::Float64, ::Float64, ::Int32) at .\twiceprecision.jl:349
[3] linspace(::Float64, ::Float64, ::Int32) at .\twiceprecision.jl:338
[4] linspace(::Int32, ::Float64, ::Int32) at .\range.jl:243
[5] macro expansion at C:\Users\appveyor\.julia\v0.6\SampledSignals\test\WAVDisplay.jl:104 [inlined]
...
where WAVDisplay.jl:104 is the linspace
line in the test above.
Sorry for the big dump here; I'm not sure exactly what I'm looking for, in the sense that the nondeterministic behavior makes me wonder if it's a `Base` thing, not just a `FixedPointNumbers` thing.
Is there a better way to test 32-bit behavior other than pushing to appveyor? Maybe a docker box?
I wonder if there might be some potential for confusion with the notion of a fixed point of a function.
I manually merged #100 into master and ran it. Precompilation works, but `Pkg.test` fails (obviously, with `Base.Test` vs `Test` as a stdlib), but also:
_
_ _ _(_)_ | A fresh approach to technical computing
(_) | (_) (_) | Documentation: https://docs.julialang.org
_ _ _| |_ __ _ | Type "?help" for help.
| | | | | | |/ _` | |
| | |_| | | | (_| | | Version 0.7.0-DEV.3420 (2018-01-16 17:34 UTC)
_/ |\__'_|_|_|\__'_| | Commit aea9155* (0 days old master)
|__/ | x86_64-linux-gnu
julia> using FixedPointNumbers
julia> using Test
julia> Test.detect_ambiguities(FixedPointNumbers, Base, Core)
8-element Array{Tuple{Method,Method},1}:
((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(z::Complex) where T<:Real in Base at complex.jl:37)
((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::T) where T<:Number in Core at boot.jl:686)
((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::Char) where T<:Number in Core at boot.jl:684)
((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(z::Complex) where T<:Real in Base at complex.jl:37)
((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::Base.TwicePrecision) where T<:Number in Base at twiceprecision.jl:243)
((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::Char) where T<:Number in Core at boot.jl:684)
((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::Base.TwicePrecision) where T<:Number in Base at twiceprecision.jl:243)
((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::T) where T<:Number in Core at boot.jl:686)
Here's an example:
julia> f = x->begin
i=0;
for k in eachindex(x)
i+=x[k]
end
end
(::#21) (generic function with 1 method)
julia> d = rand(N2f14, 100,100,100,100);
julia> @time f(d);
11.956324 seconds (700.00 M allocations: 10.431 GB, 10.25% gc time)
julia> using Images
julia> @time f(rawview(d));
0.128216 seconds (1.73 k allocations: 81.318 KB)
I'm getting by for now by using `Images.rawview` when iterating through `N2f14` arrays, but I expected that wouldn't be necessary.
As described in https://discourse.julialang.org/t/ann-plans-for-removing-packages-that-do-not-yet-support-1-0-from-the-general-registry/ we are planning on removing packages that do not support 1.0 from the General registry. This package has been detected to not support 1.0 and is thus slated to be removed. The removal of packages from the registry will happen approximately a month after this issue is open.
To transition to the new Pkg system using Project.toml
, see https://github.com/JuliaRegistries/Registrator.jl#transitioning-from-require-to-projecttoml.
To then tag a new version of the package, see https://github.com/JuliaRegistries/Registrator.jl#via-the-github-app.
If you believe this package has erroneously been detected as not supporting 1.0 or have any other questions, don't hesitate to discuss it here or in the thread linked at the top of this post.
@timholy, is this intended behaviour? On current master:
convert(UFixed8,0xff) = UFixed8(0.004)
convert(UFixed8,0x00) = UFixed8(0.0)
convert(UFixed8,0x01) = UFixed8(1.0)
reinterpret(UFixed8,0xff) = UFixed8(1.0)
reinterpret(UFixed8,0x00) = UFixed8(0.0)
reinterpret(UFixed8,0x01) = UFixed8(0.004)
Thinking I might have broken it, I looked at a previous commit, bb3dee7:
convert(UFixed8,0xff) = Ufixed8(0.004)
convert(UFixed8,0x00) = Ufixed8(0.0)
convert(UFixed8,0x01) = Ufixed8(1.0)
reinterpret(UFixed8,0xff) = Ufixed8(1.0)
reinterpret(UFixed8,0x00) = Ufixed8(0.0)
reinterpret(UFixed8,0x01) = Ufixed8(0.004)
I'd like to move the whole JuliaImages stack to version numbers >= 1.0 (see JuliaImages/Images.jl#825). Let's use this issue to collect plans for breaking changes in the near future.
Breaking changes include things like renamings, returning a different value than we used to (e.g., #129 might qualify, though rounding errors are a bit ambiguous), or removing externally-visible functionality. Bug fixes, new features, performance improvements, etc., do not count as breaking, though of course they are very welcome.
Did FixedPoint arithmetic change recently? I'm fairly certain that this
using FixedPointNumbers: UFixed
a = UFixed{UInt8, 8}(1.0)
b = UFixed{UInt8, 8}(0.65)
@show a b a*b
> a = UFixed8(1.0)
> b = UFixed8(0.651)
> a * b = UFixed8(0.647)
was not the case until I ran `Pkg.update()` today. It's a bit of a problem for my use case...
If I have a 24-bit fixed-point number `x`, I might represent it as a `Fixed{Int32, 23}`. If I later wanted to widen it to a 32-bit number to get extra precision, I might try `convert(Fixed{Int32, 31}, x)`, but that doesn't currently have a method. We could implement it just as a left shift.
I'm not sure what the best overflow checking behavior would be - in this case we'd worry about any values over 1 that would get lost. For a right-shift we'd get extra headroom but lose precision. Currently the float-to-fixed behavior is to throw an InexactError
on an overflowing conversion but not when losing precision, so we could match that behavior and only need to check on left-shifts.
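A Base-only sketch of the raw-integer re-scaling, with a left-shift overflow check matching that convention (rescale_raw is a hypothetical name, not the package API):

```julia
# Re-scale the raw integer of Fixed{Int32,f} onto the grid of Fixed{Int32,f2}.
# Left shifts (f2 > f) can overflow, so check; right shifts just drop precision.
function rescale_raw(i::Int32, f::Int, f2::Int)
    if f2 >= f
        d = f2 - f
        # round-trip check: any bits lost in the left shift mean overflow
        (i << d) >> d == i || throw(InexactError(:rescale_raw, Int32, i))
        return i << d
    else
        return i >> (f - f2)   # silently loses low bits, matching trunc behavior
    end
end

# 0.5 stored with 23 fractional bits -> 31 fractional bits, value preserved:
rescale_raw(Int32(1) << 22, 23, 31) == Int32(1) << 30  # true
```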
julia> A = N0f8(0.5)
0.502N0f8
julia> 2*A*A ≈ 2*A^2 # not equal, because of operator precedence/associativity
false
This will introduce hard-to-find bugs in future usage. I think we need to guarantee that *(::Normed, ::Any)::Normed to avoid this.
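The gap comes from where rounding back to the N0f8 grid happens. A Base-only model of one plausible mechanism (q is my stand-in for the rounding a Normed*Normed product performs; 2*A escapes to floating point first, so no rounding occurs on that path):

```julia
q(x) = round(x * 255) / 255     # quantize to the N0f8 grid (steps of 1/255)
A = q(0.5)                      # stored as 128/255 ≈ 0.502
lhs = (2 * A) * A               # 2*A leaves the fixed-point domain: no rounding
rhs = 2 * q(A * A)              # A^2 is rounded back to the grid before doubling
abs(lhs - rhs)                  # ≈ 0.002, far outside isapprox's tolerance
```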
convert(Fixed{Int8, 7}, x) seems to happily round to the nearest representable value for most values of x, but throws an InexactError for x = 0.999. AFAICT it throws the error when the value would get rounded up to 1.0, but 1.0 can't be represented. Is this expected behavior? I would expect that in this case it would act the same as convert(Fixed{Int8, 7}, 1.0), which overflows to -1.0.
julia> convert(Fixed{Int8, 7}, 0.8)
FixedPointNumbers.Fixed{Int8,7}(0.797)
julia> convert(Fixed{Int8, 7}, 0.9)
FixedPointNumbers.Fixed{Int8,7}(0.898)
julia> convert(Fixed{Int8, 7}, 0.999)
ERROR: InexactError()
in trunc at float.jl:357
in convert at /Users/srussell/.julia/v0.4/FixedPointNumbers/src/fixed.jl:34
julia> convert(Fixed{Int8, 7}, 1.0)
FixedPointNumbers.Fixed{Int8,7}(-1.0)
Should convert accept values that can't be exactly represented? My understanding was that normally convert(T, x) is supposed to preserve information and throw an error if the value can't be represented, while round(T, x) is supposed to round to the nearest representable number, but I don't think that's explicitly stated, so I could be wrong there.
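The boundary can be seen with Base arithmetic alone: the nearest Fixed{Int8,7} grid point to 0.999 is 128/128, and 128 exceeds typemax(Int8). The helper name below is mine:

```julia
raw(x) = round(Int, x * 128)   # raw integer a Fixed{Int8,7} would store

raw(0.9)    # 115, which fits in Int8
raw(0.999)  # 128, which exceeds typemax(Int8) == 127, hence the InexactError
```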
I have confirmed that Julia v1.2.0 and v1.3.0 give nearly identical results for Normed -> Float conversions (#129, #138). However, I found a performance regression (~2x-3x slower) on x86_64 machines in the following cases:
Vec4{N0f32} -> Vec4{Float32}
Vec4{N0f64} -> Vec4{Float32}
Vec4{N0f64} -> Vec4{Float64}
(cf. #129 (comment))
I'm not going to rush to investigate the cause or fix this problem. I submit this issue as a placeholder in case any useful information is found.
I recently had to accumulate several differences of N0f8 values and then take the abs of the result. So I tried
#a,b of type N0f8
x = zero(Q7f8)
x += a - b
which probably would've failed anyway, because the difference is negative and thus not representable.
So the next step for me would've been to convert a and b to Q7f8, which didn't work.
Hence my question: is there any reason why we can't mix these types?
By the way, promote(N0f8(.5), Q7f8(.5)) throws an error.
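One reason the promotion is awkward: N0f8 steps are 1/255 while Q7f8 steps are 1/256, so neither grid contains the other, and a lossless common fixed-point type would need a denominator of lcm(255, 256). A Base-only illustration (variable names are mine):

```julia
lcm(255, 256)   # 65280: the denominator a lossless common type would need

# Meanwhile, a signed difference of two N0f8 raw values is exact as a float:
a_raw, b_raw = 0x80, 0xcc               # stored bytes for 128/255 and 204/255
d = (Int(a_raw) - Int(b_raw)) / 255     # -76/255 ≈ -0.298, negative, so not
                                        # representable in N0f8 itself
```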
julia> using FixedPointNumbers
INFO: Precompiling module FixedPointNumbers...
ERROR: LoadError: LoadError: error in method definition: function Base.minmax must be explicitly imported to be extended
in include(::UTF8String) at ./boot.jl:264
in include_from_node1(::ASCIIString) at ./loading.jl:417
in include(::ASCIIString) at ./boot.jl:264
in include_from_node1(::ASCIIString) at ./loading.jl:417
in eval(::Module, ::Any) at ./boot.jl:267
[inlined code] from ./sysimg.jl:14
in process_options(::Base.JLOptions) at ./client.jl:239
in _start() at ./client.jl:318
while loading /home/synthetica/.julia/v0.5/FixedPointNumbers/src/ufixed.jl, in expression starting on line 131
while loading /home/synthetica/.julia/v0.5/FixedPointNumbers/src/FixedPointNumbers.jl, in expression starting on line 59
ERROR: Failed to precompile FixedPointNumbers to /home/synthetica/.julia/lib/v0.5/FixedPointNumbers.ji
in error(::ASCIIString) at ./error.jl:21
in compilecache(::ASCIIString) at ./loading.jl:496
in require(::Symbol) at ./loading.jl:355
in eval(::Module, ::Any) at ./boot.jl:267
This error can be fixed by adding import Base.minmax to the file ufixed.jl. (I don't want to deal with git right now.)
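The underlying rule is general: adding a method to a Base function from inside a module requires importing it (or qualifying it as Base.minmax) first. A minimal self-contained illustration with a toy type (Demo and U8 are mine, not the package's):

```julia
module Demo
import Base: minmax   # without this line, defining `minmax(::U8, ::U8)` errors
                      # with "must be explicitly imported to be extended"
struct U8
    i::UInt8
end
minmax(a::U8, b::U8) = a.i <= b.i ? (a, b) : (b, a)
end

# The new method is attached to Base.minmax, so the ordinary name finds it:
lo, hi = minmax(Demo.U8(3), Demo.U8(1))   # pair ordered by the .i field
```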
I commented on issue #8199 and found the source of a REPL crash in the ufixed.jl file. Lines 156-165 somehow interfere with a base UnionType.
My error is:
julia> using FixedPointNumbers
julia> writedlm(ERROR: type UnionType has no field body
in show at show.jl:80 (repeats 2 times)
in print_to_string at ./string.jl:24
in argtype_decl at methodshow.jl:18
in arg_decl_parts at methodshow.jl:30
in show at methodshow.jl:36
in print_to_string at string.jl:24
in string at string.jl:31
in complete_methods at ./REPLCompletions.jl:144
in completions at ./REPLCompletions.jl:207
in completions_3B_3707 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so
in complete_line at REPL.jl:280
in complete_line at LineEdit.jl:141
in complete_line at LineEdit.jl:139
in anonymous at LineEdit.jl:1175
in anonymous at LineEdit.jl:1197
in prompt! at ./LineEdit.jl:1397
in run_interface at ./LineEdit.jl:1372
in run_interface_3B_3724 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so
in run_frontend at ./REPL.jl:819
in run_repl at ./REPL.jl:170
in _start at ./client.jl:399
in _start_3B_3590 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so
After commenting out the show function in ufixed.jl, I don't receive that error anymore.
# Show
function show(io::IO, x::Ufixed)
print(io, "Ufixed", nbitsfrac(typeof(x)))
print(io, "(")
showcompact(io, x)
print(io, ")")
end
showcompact(io::IO, x::Ufixed) = show(io, round(convert(Float64,x), iceil(nbitsfrac(typeof(x))/_log2_10)))
show{T<:Ufixed}(io::IO, ::Type{T}) = print(io, "Ufixed", nbitsfrac(T))
My julia version:
julia> versioninfo()
Julia Version 0.4.0-dev+5127
Commit 6277015* (2014-09-05 03:57 UTC)
DEBUG build
Platform Info:
System: Linux (x86_64-unknown-linux-gnu)
CPU: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
I checked whether this error is reproducible in the stable 0.3 release: it is.
Running test in Julia v0.3:
julia> Pkg.test("FixedPointNumbers")
INFO: Testing FixedPointNumbers
ERROR: test failed: one(T) == 1
while loading \.julia\v0.3\FixedPointNumbers\test\ufixed.jl, in expression starting on line 20
while loading \.julia\v0.3\FixedPointNumbers\test\runtests.jl, in expression starting on line 2
...
ERROR: FixedPointNumbers had test errors
julia> one(Ufixed8)
Ufixed8(1.0)
julia> ans == 1
false
However, the following code works as expected:
julia> f() = Ufixed8(1.0) == 1
f (generic function with 2 methods)
julia> f()
true
julia> apply(==, promote(one(Ufixed8), 1))
true