
fixedpointnumbers.jl's People

Contributors

andreasnoack, bjarthur, dependabot[bot], femtocleaner[bot], fredrikekre, hyrodium, inkydragon, jeffbezanson, johnnychen94, juliatagbot, keno, kimikage, oscardssmith, pwl, ralphas, ranocha, rdeits, sacha0, samuelpowell, schmrlng, scls19fr, simonbyrne, ssfrr, staticfloat, stevengj, timholy, tkelman, vchuravy, yuyichao

fixedpointnumbers.jl's Issues

`realmin` incorrect

The definition of realmin seems strange for fixed-point numbers. The observed behavior is:

julia> realmin(Q11f4)
-2048.0Q11f4

But the expected behavior is:

julia> realmin(Q11f4)
0.06Q11f4

as per docstring,

help?> realmin
search: realmin realmax readdlm ReadOnlyMemoryError

  realmin(T)

  The smallest in absolute value non-subnormal value representable by the given floating-point
  DataType T.

(maybe this function is redundant with eps and need not be defined?)
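
For reference, a minimal sketch of what an eps-based definition could look like (assuming the intent is "smallest positive value"; fixed-point types have no subnormals):

# a hedged sketch, assuming realmin should return the smallest positive value,
# which for fixed-point types coincides with eps
Base.realmin(::Type{T}) where {T <: FixedPoint} = eps(T)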

`rem` returns zero for negative float input on ARMv7

While checking the fix for issue #129, I ran the tests on a 32-bit ARMv7 system (RPi 2 Model B v1.2) and ran into a problem with rem (%).

modulus: Test Failed at ~/.julia/dev/FixedPointNumbers/test/normed.jl:148
  Expression: (-0.3 % N0f8).i == round(Int, -0.3 * 255) % UInt8
   Evaluated: 0x00 == 0xb4
modulus: Test Failed at ~/.julia/dev/FixedPointNumbers/test/normed.jl:154
  Expression: (-0.3 % N6f10).i == round(Int, -0.3 * 1023) % UInt16
   Evaluated: 0x0000 == 0xfecd

The cause is the behavior of unsafe_trunc.

rem(x::Real, ::Type{T}) where {T <: Normed} = reinterpret(T, _unsafe_trunc(rawtype(T), round(rawone(T)*x)))

_unsafe_trunc(::Type{T}, x::Integer) where {T} = x % T
_unsafe_trunc(::Type{T}, x) where {T} = unsafe_trunc(T, x)

julia> versioninfo()
Julia Version 1.0.3
Platform Info:
  OS: Linux (arm-linux-gnueabihf)
  CPU: ARMv7 Processor rev 4 (v7l)
  WORD_SIZE: 32
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, cortex-a53)

julia> unsafe_trunc(UInt8, -76.0) # or the intrinsic `fptoui`
0x00

julia> unsafe_trunc(Int8, -76.0)
-76

julia> unsafe_trunc(UInt8, unsafe_trunc(Int8, -76.0))
0xb4

(The problem occurs not only on v1.0.3 but also on v1.0.5 and v1.2.0. I have not tried 64-bit builds.)

Although the behavior of unsafe_trunc may not be what we want, this is not a bug.

If the value is not representable by T, an arbitrary value will be returned.

https://docs.julialang.org/en/v1/base/math/#Base.unsafe_trunc

However, I don't think it is good to force rem users to be aware of the internal unsafe_trunc.
The workaround is to truncate to a Signed type first, as shown above.
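
A minimal sketch of that workaround, expressed as an extra method for the helper above (the exact signature is my assumption):

# truncate via the signed counterpart (fptosi), then wrap to the unsigned raw type
_unsafe_trunc(::Type{T}, x::AbstractFloat) where {T <: Unsigned} = unsafe_trunc(signed(T), x) % T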

BTW, the behavior of Normed's rem, which is specified by the above tests, seems unintuitive. (Since I know the inside of Normed, I think the behavior is reasonable, though.)
So another option is to eliminate the tests for negative float inputs, i.e. make that case undefined behavior.

Rounding errors in `Normed` to `Float` conversions

While reviewing PR #123, I found a minor bug.

julia> float(3N3f5) # this is ok
3.0f0

julia> float(3N4f12)
3.0000002f0

julia> float(3N8f8)
3.0000002f0

julia> float(3N10f6)
3.0000002f0

julia> float(3N12f4)
3.0000002f0

julia> float(3N13f3)
3.0000002f0

julia> float(3N2f6)
3.0000002f0

julia> float(3N4f4)
3.0000002f0

julia> float(3N5f3)
3.0000002f0

I think this is a problem with the division optimization.

function (::Type{T})(x::Normed) where {T <: AbstractFloat}
    y = reinterpret(x)*(one(rawtype(x))/convert(T, rawone(x)))
    convert(T, y)  # needed for types like Float16 which promote arithmetic to Float32
end

The rounding errors can occur everywhere, but the errors at integer values are especially critical.

julia> isinteger(float(3N5f3))
false
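
A hedged sketch of an accuracy-first alternative (not necessarily the fix adopted in #129): perform the division per element instead of multiplying by a precomputed reciprocal, trading speed for correctly rounded results.

function (::Type{T})(x::Normed) where {T <: AbstractFloat}
    # integer / integer promotes to Float64, so the quotient is computed at
    # (at least) double precision before narrowing to T
    convert(T, reinterpret(x) / rawone(x))
end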

`testapprox` should be moved or inlined

The "fixed" testset depends on testapprox which is defined in "test/normed.jl" (formerly named ufixed.jl).

@testset "testapprox" begin
for T in [Fixed{Int8,7}, Fixed{Int16,8}, Fixed{Int16,10}]
testapprox(T) # defined in ufixed.jl
end
end

function testapprox(::Type{T}) where {T}
for x = typemin(T):eps(T):typemax(T)-eps(T)
y = x+eps(T)
@test x y
@test y x
@test !(x y+eps(T))
end
end

This thwarts the selective execution of tests and causes an error (UndefVarError: testapprox not defined) in certain environments (e.g. Julia v1.0.3 on 32-bit ARMv7).

I think testapprox is not so complicated, so it may be a good idea to inline the function or define it locally.
This may be off topic, but I also think testapprox (and its friend testtrunc) generate too many test cases; @test should be outside the loop, as in the sketch below.
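
A sketch of a self-contained replacement with a single recorded result per type (an illustration, not a drop-in patch):

using Test, FixedPointNumbers

@testset "testapprox" begin
    for T in (Fixed{Int8,7}, Fixed{Int16,8}, Fixed{Int16,10})
        # fold the per-element checks into one Bool so only one test result
        # is recorded per type
        @test all(typemin(T):eps(T):typemax(T)-eps(T)) do x
            y = x + eps(T)
            x ≈ y && y ≈ x && !(x ≈ y + eps(T))
        end
    end
end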

bad conversions for Int32 on 32-bit machines

Related to my testing in #103, but this seems more straightforward, so I figured it might be cleaner to separate it into its own issue.

julia> typemax(Fixed{Int8, 7})
0.992Q0f7

julia> typemax(Fixed{Int16, 15})
0.99997Q0f15

julia> typemax(Fixed{Int32, 31})
-0.9999999995Q0f31

julia> typemax(Fixed{Int64, 63})
InfQ0f63

julia> Fixed{Int32, 31}(0.2)
-1.7999999998Q0f31

julia> versioninfo()
Julia Version 0.6.2
Commit d386e40c17 (2017-12-13 18:08 UTC)
Platform Info:
  OS: Linux (i686-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
  WORD_SIZE: 32
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Prescott)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, broadwell)

Failing to load FixedPointNumbers

I am seeing this in JuliaBox. julia-0.4-rc2

Doing a using FixedPointNumbers throws:

INFO: Recompiling stale cache file /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: Module Compat uuid did not match cache file
WARNING: deserialization checks failed while attempting to load cache from /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji
INFO: Precompiling module FixedPointNumbers...
INFO: Recompiling stale cache file /home/juser/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: Module Compat uuid did not match cache file
LoadError: __precompile__(true) but require failed to create a precompiled cache file
while loading In[9], in expression starting on line 2

 in require at ./loading.jl:252
 in stale_cachefile at loading.jl:439
 in recompile_stale at loading.jl:457
 in _require_from_serialized at loading.jl:83
 in _require_from_serialized at ./loading.jl:109
 in require at ./loading.jl:219
 in stale_cachefile at loading.jl:439
 in recompile_stale at loading.jl:457
 in _require_from_serialized at loading.jl:83
 in _require_from_serialized at ./loading.jl:109
 in require at ./loading.jl:219

Doing a using Compat succeeds. Pkg.status is as follows:

1 required packages:
 - ProfileView                   0.1.1
14 additional packages:
 - BinDeps                       0.3.17
 - Cairo                         0.2.31
 - ColorTypes                    0.1.6
 - Colors                        0.5.4
 - Compat                        0.7.3
 - Docile                        0.5.19
 - FixedPointNumbers             0.0.11
 - Graphics                      0.1.3
 - Gtk                           0.9.2
 - GtkUtilities                  0.0.5
 - HttpCommon                    0.2.4
 - Reexport                      0.0.3
 - SHA                           0.1.2
 - URIParser                     0.1.0

Parsing issue with uf8 (etc.) suffixes

This might belong to Julia, since it seems to be related to parsing.

julia> 0x80uf8^2
ERROR: `*` has no method matching *(::UfixedConstructor{Uint8,8}, ::UfixedConstructor{Uint8,8})
 in power_by_squaring at intfuncs.jl:56
 in ^ at intfuncs.jl:86

julia> (0x80uf8)^2
0.25196466f0

For comparison:

julia> 1.23f0^2
1.5129f0

Convert for arrays

x = rand(Float32, 100, 100)

julia> convert(Ufixed16,x)
ERROR: convert has no method matching convert(::Type{UfixedBase{Uint16,16}}, ::Array{Float32,2})
in convert at base.jl:13

However:

julia> convert(Ufixed16,x[1,1])
Ufixed16(0.31148)

I am new to Julia, so maybe I'm just doing something wrong?
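
For reference, convert here is defined for scalars; a minimal sketch of the elementwise approach (map works on the Julia versions of that era, and the dot-broadcast form is the modern spelling):

A = rand(Float32, 100, 100)
B = map(v -> convert(Ufixed16, v), A)  # elementwise conversion
# on newer Julia: B = Ufixed16.(A)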

Possible extension for unsigned integer types?

For use in Color/Images, I'm considering implementing a set of types that I was thinking of calling Fixed8, Fixed10, Fixed12, Fixed14, and Fixed16. The idea behind these is to allow Uint8 or Uint16s to be used to represent image pixels, but have Fixed8(0xff) == 1.0 evaluate as true. (The 10, 12, 14, and 16 versions would be used for images collected with 10, 12, 14, and 16-bit cameras, but all would use an underlying Uint16 data type.) That would make sure that the numeric/bitwise representation was decoupled from the meaning of "white" in an RGB color sense. See discussion in JuliaAttic/Color.jl#42.

I'm wondering if this functionality should go into this package, or whether it's incompatible with your goals and hence should be a different package.

Overflow-checked arithmetic

As discussed with @timholy in https://github.com/JuliaLang/julia/issues/15690, I've recently been bitten by the inconsistency regarding overflow in FixedPoint arithmetic when dealing with Images.jl.

FixedPointNumbers pretend to be <:Real numbers, but their arithmetic behaves more like Integer. I just cannot think of any use case where one gains an advantage from "modulo" arithmetic when dealing with numbers guaranteed to fall within the [0, 1] range. I'd be glad if my code stopped with an OverflowError exception indicating a problem in my algorithm, easily fixable by manually widening the operands. With the current silent overflow, the algorithm just continues and finally gives wrong results.

Before writing a PR to introduce arithmetic that throws on overflow or underflow, I'd like to know your opinion on this change. I was also thinking of using a global "flag" to dynamically opt in to the overflow-checking behavior, but I'm worried about the branching costs. Is it possible, e.g., to use traits for this behavior change?
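
For illustration, a hedged sketch of what one opt-in checked operation could look like (checked_add is a hypothetical name, and a real design would need to cover all the arithmetic operators):

function checked_add(x::T, y::T) where {T <: Normed}
    # operate on the raw representation; overflow there is exactly overflow
    # of the Normed value
    z, ovf = Base.Checked.add_with_overflow(reinterpret(x), reinterpret(y))
    ovf && throw(OverflowError("$x + $y overflows $T"))
    reinterpret(T, z)
end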

Optimizing `Normed` -> `Normed` conversions

As I suggested here, the current Normed -> Normed conversions are inefficient in some cases.

function Normed{T,f}(x::Normed{T2}) where {T <: Unsigned,T2 <: Unsigned,f}
    U = Normed{T,f}
    y = round((rawone(U)/rawone(x))*reinterpret(x))
    (0 <= y) & (y <= typemax(T)) || throw_converterror(U, x)
    reinterpret(U, _unsafe_trunc(T, y))
end

The current conversion method has two problems:

  1. It always checks the input range even if there is no need, and may throw the exception.
  2. It always uses floating-point operations even if there is no need.

The former means that the method is not SIMD-suitable.
Regarding the latter, the conversion between types with the same f is already specialized.

Normed{T1,f}(x::Normed{T2,f}) where {T1 <: Unsigned,T2 <: Unsigned,f} = Normed{T1,f}(convert(T1, x.i), 0)

There is also the N0f8->N0f16 specialization. (I wonder why.)
N0f16(x::N0f8) = reinterpret(N0f16, convert(UInt16, 0x0101*reinterpret(x)))

I do not think these are urgent problems. However, the optimization may be useful in the future to speed up accumulation (reduce). And I just found an (ugly) workaround for the constant-division problem, so I am writing this issue as a memorandum or reminder.

The figures below visualize the cases where the optimization is available.

  • The positive (greenish) area means that the conversion is overflow-safe (i.e. there is no need for the range checking).
  • The negative (reddish) area means that the conversion is unsafe (i.e. it may throw the exception).
  • The deep-colored cells mean that the conversion does not need floating-point operations.
    • As mentioned above, the f1 == f2 lines are already supported.

[Figures: overflow-safety maps for Normed{UInt8} -> Normed{UInt16}, Normed{UInt8} -> Normed{UInt32}, and Normed{UInt16} -> Normed{UInt32}]

You can get the result of other cases with the following script:

using Gadfly, Colors

set_default_plot_size(15cm, 8cm)

function mat(dest, src)
    b1, b2 = 8*sizeof(dest), 8*sizeof(src)
    if b1 > b2 # widening
        safe = [b1-f1 > b2-f2 || b2 == f2 ? 1 : -1 for f2=1:b2, f1=1:b1]
    else
        safe = [b1-f1 >= b2-f2 ? 1 : -1 for f2=1:b2, f1=1:b1]
    end
    safe .* [isinteger(f1/f2) ? f2/f1 : 1/36 for f2=1:b2, f1=1:b1]
end;

function plot_mat(dest, src)
    s = Scale.color_continuous(
        colormap=Scale.lab_gradient(LCHab(0, 100, 20), "white", LCHab(30, 100, 200)),
        minvalue=-1, maxvalue=1)
    m = mat(dest, src)
    spy(m, 
        Guide.title("Normed{$src,f2} -> Normed{$dest,f1}"),
        Guide.xlabel("f1"), Guide.xticks(ticks=axes(m, 2)),
        Guide.ylabel("f2"), Guide.yticks(ticks=axes(m, 1)),
        Guide.colorkey(title="scale"), s,
        Theme(plot_padding=[0mm,2mm,5mm,0mm]))
end;

plot_mat(UInt16, UInt8)
plot_mat(UInt32, UInt8)
plot_mat(UInt32, UInt16)

Edit: The safe areas were wrong; I forgot to take the "carry" (overlapping) into account. I will soon fix the figures and the script above. (Update: fixed.)

isinteger(::Normed) incorrect

The code currently assumes that the denominator is 2^f, which is not the case for Normed. For example:

julia> isinteger(Normed{UInt8,7}(1))
false

I believe you need to change

isinteger(x::FixedPoint{T,f}) where {T,f} = (x.i&(1<<f-1)) == 0

to

isinteger(x::Fixed{T,f}) where {T,f} = (x.i&(1<<f-1)) == 0
isinteger(x::Normed{T,f}) where {T,f} = (x.i%(1<<f-1)) == 0

ERROR: LoadError: UndefVarError: promote_sys_size not defined

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.7.0-DEV.3266 (2018-01-04 16:44 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 93454e2* (0 days old master)
|__/                   |  x86_64-linux-gnu

julia> Pkg.test("FixedPointNumbers.jl")
[ Info: Testing FixedPointNumbers @ Base.Pkg.Entry entry.jl:723
ERROR: LoadError: UndefVarError: promote_sys_size not defined
Stacktrace:
 [1] getproperty(::Module, ::Symbol) at ./sysimg.jl:14
 [2] top-level scope at /home/lobi/.julia/v0.7/FixedPointNumbers/src/FixedPointNumbers.jl:140
 [3] include at ./boot.jl:295 [inlined]
 [4] include_relative(::Module, ::String) at ./loading.jl:521
 [5] include(::Module, ::String) at ./sysimg.jl:26
 [6] top-level scope
 [7] eval at ./boot.jl:298 [inlined]
 [8] top-level scope at ./<missing>:2
in expression starting at /home/lobi/.julia/v0.7/FixedPointNumbers/src/FixedPointNumbers.jl:140
ERROR: LoadError: Failed to precompile FixedPointNumbers to /home/lobi/.julia/lib/v0.7/FixedPointNumbers.ji.
Stacktrace:
 [1] error at ./error.jl:33 [inlined]
 [2] compilecache(::String) at ./loading.jl:648
 [3] compilecache at ./loading.jl:605 [inlined]
 [4] _require(::Symbol) at ./loading.jl:460
 [5] require(::Symbol) at ./loading.jl:333
 [6] include at ./boot.jl:295 [inlined]
 [7] include_relative(::Module, ::String) at ./loading.jl:521
 [8] include(::Module, ::String) at ./sysimg.jl:26
 [9] process_options(::Base.JLOptions) at ./client.jl:323
 [10] _start() at ./client.jl:374
in expression starting at /home/lobi/.julia/v0.7/FixedPointNumbers/test/runtests.jl:1
┌ Error: ------------------------------------------------------------
│ # Testing failed for FixedPointNumbers
│   exception = ErrorException("failed process: Process(`/home/lobi/julia07/usr/bin/julia -Cnative -J/home/lobi/julia07/usr/lib/julia/sys.so --compile=yes --depwarn=yes --code-coverage=none --color=yes --compiled-modules=yes --check-bounds=yes --warn-overwrite=yes --startup-file=yes /home/lobi/.julia/v0.7/FixedPointNumbers/test/runtests.jl`, ProcessExited(1)) [1]")
└ @ Base.Pkg.Entry entry.jl:739
ERROR: FixedPointNumbers had test errors

Could you tag a new version for Julia 0.6?

The latest tagged version (i.e. v0.3.0) generates zillions of deprecation warnings. I think you have already noticed that, and a new version will be released sooner or later, but just in case.

support reinterpret from UIntX to Fixed{IntX, N}

Currently if you have a UInt16 that you want to reinterpret as a signed fixed-point number the reinterpret method here doesn't apply, so you get an error:

julia> reinterpret(Fixed{Int16, 15}, 0x8123)
ERROR: bitcast: target type not a leaf primitive type
Stacktrace:
 [1] reinterpret(::Type{FixedPointNumbers.Fixed{Int16,15}}, ::UInt16) at ./essentials.jl:155
 [2] eval(::Module, ::Any) at ./boot.jl:235

Is it worth having another reinterpret method to handle this, or is it safer to have the user double-reinterpret? (e.g. reinterpret(Fixed{Int16, 15}, reinterpret(Int16, 0x8123)))

One proposed method would be:

function Base.reinterpret(::Type{Fixed{T,f}}, x::Unsigned) where {T <: Signed,f}
    reinterpret(Fixed{T, f}, reinterpret(T, x))
end

Or we could even drop the requirement that x subtypes Unsigned if we want to reinterpret more broadly.

Ambiguity warnings in 0.4

#55 was closed in September 2016; however, I see the same with

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
 _/ |\__'_|_|_|\__'_|  |  
|__/                   |  x86_64-linux-gnu

julia> Pkg.init()
INFO: Initializing package repository /home/lobi/.julia/v0.4
INFO: Cloning METADATA from git://github.com/JuliaLang/METADATA.jl

julia> Pkg.add("Cairo")
INFO: Installing BinDeps v0.4.5
INFO: Installing Cairo v0.2.35
INFO: Installing ColorTypes v0.2.12
INFO: Installing Colors v0.6.9
INFO: Installing Compat v0.19.0
INFO: Installing FixedPointNumbers v0.2.1
INFO: Installing Graphics v0.1.3
INFO: Installing Reexport v0.0.3
INFO: Installing SHA v0.3.1
INFO: Installing URIParser v0.1.8
INFO: Building Cairo
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/BinDeps.ji for module BinDeps.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Compat.ji for module Compat.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/URIParser.ji for module URIParser.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/SHA.ji for module SHA.
INFO: Package database updated

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Cairo.ji for module Cairo.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Colors.ji for module Colors.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/FixedPointNumbers.ji for module FixedPointNumbers.
WARNING: New definition 
    floattype(Type{#T<:FixedPointNumbers.Fixed}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/fixed.jl:16
is ambiguous with: 
    floattype(Type{FixedPointNumbers.FixedPoint{#T<:Union{Int8, UInt16, Int16, UInt8}, #f<:Any}}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/FixedPointNumbers.jl:89.
To fix, define 
    floattype(Type{FixedPointNumbers.Fixed{_<:Union{Int8, Int16}, #f<:Any}})
before the new definition.
WARNING: New definition 
    floattype(Type{#T<:FixedPointNumbers.UFixed}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:14
is ambiguous with: 
    floattype(Type{FixedPointNumbers.FixedPoint{#T<:Union{Int8, UInt16, Int16, UInt8}, #f<:Any}}) at /home/lobi/.julia/v0.4/FixedPointNumbers/src/FixedPointNumbers.jl:89.
To fix, define 
    floattype(Type{FixedPointNumbers.UFixed{_<:Union{UInt16, UInt8}, #f<:Any}})
before the new definition.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/ColorTypes.ji for module ColorTypes.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Reexport.ji for module Reexport.
INFO: Recompiling stale cache file /home/lobi/.julia/lib/v0.4/Graphics.ji for module Graphics.
INFO: Cairo tests passed

julia> Pkg.status()
1 required packages:
 - Cairo                         0.2.35
9 additional packages:
 - BinDeps                       0.4.5
 - ColorTypes                    0.2.12
 - Colors                        0.6.9
 - Compat                        0.19.0
 - FixedPointNumbers             0.2.1
 - Graphics                      0.1.3
 - Reexport                      0.0.3
 - SHA                           0.3.1
 - URIParser                     0.1.8

deprecation warning with 0.4.0-dev+5791

   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.0-dev+5791 (2015-07-04 23:24 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 59a1e9c* (3 days old master)
|__/                   |  x86_64-linux-gnu

julia> using FixedPointNumbers

WARNING: deprecated syntax "< {" at /home/lobi/.julia/v0.4/FixedPointNumbers/src/fixed32.jl:20.
Use "<{" instead.

WARNING: deprecated syntax "< {" at /home/lobi/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:91.
Use "<{" instead.

cannot convert float to a fixed-point type with the same or greater number of bits

this works in Julia 0.6.2:

Float32(1.0) |> N0f8
Float32(1.0) |> N0f16

this will not work

Float32(1.0) |> N0f32
Float32(1.0) |> N0f64
Float64(1.0) |> N0f64

It looks like the conversion fails when the number of bits is the same or larger.
The error message:

ArgumentError: FixedPointNumbers.Normed{UInt64,64} is a 64-bit type representing 0 values from 0.0 to 1.0; cannot represent 1.0

Move to JuliaMath?

Any interest in moving this package to JuliaMath? Orgs have pluses and minuses, but on balance they seem to be recommended for key pieces of infrastructure (which this definitely is).

Define `::Integer * ::FixedPoint`?

julia> 2 * 0.5Q1f14
ERROR: InexactError()
Stacktrace:
 [1] convert at /home/fengyang/.julia/v0.6/FixedPointNumbers/src/fixed.jl:42 [inlined]
 [2] promote at ./promotion.jl:174 [inlined]
 [3] *(::Int64, ::FixedPointNumbers.Fixed{Int16,14}) at ./promotion.jl:247

This is happening because 2 is not representable in Q1f14, even though the result 1 would be. I think it is avoidable by specializing ::Integer * ::FixedPoint (and ::FixedPoint * ::Integer?) directly, but I don't have much experience with this. If this is the right thing to do, I'll make a PR.
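
If specializing is the right approach, a hedged sketch could scale the raw value directly so the Integer operand is never converted into the narrow fixed-point type:

# wraps on overflow of the raw type; a range check could be added instead
Base.:*(x::Integer, y::Fixed{T,f}) where {T,f} = reinterpret(Fixed{T,f}, (x * y.i) % T)
Base.:*(x::Fixed{T,f}, y::Integer) where {T,f} = y * x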

Commonizing code between `Fixed` and `Normed`

Fixed and Normed have evolved independently. Therefore, some functions are specialized even though they do not need specialization.

cf. ee5bd54...kimikage:commonize PR #151

I'm going to refactor the code in the following three steps:

  1. Commonize the existing implementation code
  2. Add the functions which are implemented only in either (e.g. rounding functions for Fixed)
  3. Commonize the test codes

Although "test-first" is a good practice, the step 3. requires major renovations. Please give me your advice if you have any ideas.

Ambiguity warnings in 0.4

WARNING: New definition 
    convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:46
is ambiguous with: 
    convert(Type{FixedPointNumbers.UFixed{#T1<:Any, #f<:Any}}, FixedPointNumbers.UFixed{#T2<:Any, #f<:Any}) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:44.
To fix, define 
    convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed{T<:Unsigned, #f<:Any})
before the new definition.
WARNING: New definition 
    convert(Type{FixedPointNumbers.UFixed{#T<:Any, #f<:Any}}, FixedPointNumbers.UFixed) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:46
is ambiguous with: 
    convert(Type{FixedPointNumbers.UFixed{#T1<:Any, #f<:Any}}, FixedPointNumbers.UFixed{#T2<:Any, #f<:Any}) at /home/travis/.julia/v0.4/FixedPointNumbers/src/ufixed.jl:44.

Fixed8, Fixed16, etc.

In audio it's common to use signed integer samples, which in FixedPointNumbers parlance becomes Fixed{Int16, 15} for 16-bit samples, or Fixed{Int8, 7} for 8-bit.

For my use cases, aliases like Fixed8 and Fixed16 with the above definitions would be useful, but I see there's already a definition of Fixed16 as Fixed{Int32, 16}, and a deprecated Fixed32 that aliases to the current Fixed16 definition.

Would these definitions also work for the use-cases currently using the Fixed16 definition, or do they need the extra storage space for values outside [-1, 1)?

typealiases missing?

These are mentioned in the README, but don't appear to be there?

julia> using FixedPointNumbers

julia> Normed8
UndefVarError: Normed8 not defined

julia> Normed{UInt8, 8}
FixedPointNumbers.Normed{UInt8,8}

Ufixed8(0xff) isn't Ufixed8(1.0)

julia> Ufixed8(0xff)
Ufixed8(0.004)

Isn't it supposed to be Ufixed8(1.0)?

Julia Version 0.4.0-dev+4791
Commit a9b0135 (2015-05-12 15:32 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
  WORD_SIZE: 64
  BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: liblapack.so.3
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia runtests.jl completed without any errors, though?
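
For what it's worth, the constructor/convert path treats its argument as a real value, so Ufixed8(0xff) asks for the number 255 (the 0.004 is likely silent overflow in the raw arithmetic, since 255*255 ≡ 1 mod 256); the raw-bits reading is spelled reinterpret:

reinterpret(Ufixed8, 0xff)  # Ufixed8(1.0): 0xff taken as the raw numerator, 255/255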

8/10/12 bit FixedPoint numbers in the vein of Ufixed

In JuliaGraphics/ColorTypes.jl#11 the need for FixedPoint numbers with a different bit size from 32bit arises.

Ufixed is very flexible and nice for representing ColorValues, but it seems to me that its design is a bit different: Ufixed is defined on [0, 1], while Fixed32 has a variable range depending on the number of fraction bits.

For my needs I wouldn't need a variable-fraction-bit number, as all values are within [-1, 1].

Precision when converting a normed value

I was surprised to find that converting an integer-based normed value to Float64 is not equivalent to dividing out the integer:

julia> Float64(reinterpret(Normed{UInt16, 16}, UInt16(100))) == 100/2^16
false

Is this by design? I assumed the Float64 conversion would just do the division internally.
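
Part of the difference is that the denominator for Normed{UInt16, 16} is rawone == 2^16 - 1, not 2^16; the conversion also multiplies by a precomputed reciprocal rather than dividing, which can cost a last ulp. A quick check:

using FixedPointNumbers
x = reinterpret(Normed{UInt16, 16}, UInt16(100))
Float64(x) ≈ 100 / (2^16 - 1)  # true: the denominator is 65535, not 65536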

Removing the obsolete promotions for reductions

The current codebase has no compatibility with Julia v0.6, because the syntax for parametric methods has been fully changed to the current (i.e. where) style. I think backporting is not easy. Therefore, the following code is no longer useful.

if isdefined(Base, :r_promote)
    # Julia v0.6
    Base.r_promote(::typeof(+), x::FixedPoint{T}) where {T} = Treduce(x)
    Base.r_promote(::typeof(*), x::FixedPoint{T}) where {T} = Treduce(x)
    Base.reducedim_init(f::typeof(identity),
                        op::typeof(+),
                        A::AbstractArray{T}, region) where {T <: FixedPoint} =
        Base.reducedim_initarray(A, region, zero(Treduce))
    Base.reducedim_init(f::typeof(identity),
                        op::typeof(*),
                        A::AbstractArray{T}, region) where {T <: FixedPoint} =
        Base.reducedim_initarray(A, region, oneunit(Treduce))
else

Removing the obsolete promotions may help improve the promotion rules. For example, the current promotions use const Treduce = Float64, but I don't think that is always a good choice. (I expect the negative impact of the breaking change to be significant, though.)
Moreover, removing them ostensibly improves the code coverage.

What is `scaledual`?

I am working on improving the accuracy of the conversions from Normed to Float (#129), and I am interested in scaledual, which seems to be related to the conversions.

scaledual was introduced in d1087f7.

The introduction of scaledual is a bit speculative, but it can essentially double the speed of certain operations. It has the following property:

bd, ad = scaledual(b, a)
b*a == bd*ad

but the RHS might be faster (particularly for floating-point b and an array a of fixed-point numbers).

Originally posted by @timholy in #2 (comment)

However, in the current codebase, I think scaledual does not have the property above as its test specifies (note a[1] != af8[1]):

a = rand(UInt8, 10)
rfloat = similar(a, Float32)
rfixed = similar(rfloat)
af8 = reinterpret(N0f8, a)
b = 0.5
bd, eld = scaledual(b, af8[1])
@assert b*a[1] == bd*eld

I am doing my best on #129, but a slowdown is inevitable. If scaledual can serve as a workaround for people who prefer speed over accuracy, I will feel relieved.

@timholy, did I not understand that correctly?

poor performance

Actually, this surprises me a bit, any hints/ideas on how to fix this?

A similar result can be observed for *, .*, etc.

julia> x_n0f8 = rand(N0f8, 1000, 1000);

julia> x_float64 = rand(Float64, 1000, 1000);

julia> x_uint8 = rand(UInt8, 1000, 1000);

julia> @benchmark x_n0f8 .+ x_n0f8
BenchmarkTools.Trial: 
  memory estimate:  976.84 KiB
  allocs estimate:  6
  --------------
  minimum time:     3.547 ms (0.00% GC)
  median time:      3.732 ms (0.00% GC)
  mean time:        3.838 ms (1.11% GC)
  maximum time:     7.032 ms (0.00% GC)
  --------------
  samples:          1301
  evals/sample:     1

julia> @benchmark x_float64 .+ x_float64
BenchmarkTools.Trial: 
  memory estimate:  7.63 MiB
  allocs estimate:  4
  --------------
  minimum time:     1.214 ms (0.00% GC)
  median time:      1.346 ms (0.00% GC)
  mean time:        1.869 ms (29.19% GC)
  maximum time:     5.726 ms (71.81% GC)
  --------------
  samples:          2667
  evals/sample:     1

julia> @benchmark x_uint8 .+ x_uint8
BenchmarkTools.Trial: 
  memory estimate:  976.75 KiB
  allocs estimate:  4
  --------------
  minimum time:     80.220 μs (0.00% GC)
  median time:      87.781 μs (0.00% GC)
  mean time:        235.548 μs (18.39% GC)
  maximum time:     1.844 ms (69.85% GC)
  --------------
  samples:          10000
  evals/sample:     1

very strange and non-deterministic behavior

I'm troubleshooting some 32-bit failures and seeing some very strange things. I dug down for a while and narrowed it down to floats not getting converted to Fixed-point numbers correctly, e.g. I added this to my test:

x = 0.222538
@show x
f = Fixed{Int32, 31}(x)
@show f

and got the following appveyor output on 32-bit:

x = 0.222538
f = -1.777462Q0f31

Then in trying to put together a repro to add to the FixedPointNumbers tests, I saw strange behavior on my local machine (64-bit, running Julia 0.6.2). These were entered as-written here, back-to-back, at the REPL (in Atom):

julia> typemax(Fixed{Int8, 7})
0.292Q0f7

julia> typemax(Fixed{Int8, 7})
0.992Q0f7

Same expression, different results.

I added some more tests to my code:

    @testset "fractional fixed-point works" begin
        for T in (Fixed{Int8, 7},
                  Fixed{Int16, 15},
                  Fixed{Int32, 31},
                  Fixed{Int64, 63})
            tol = (typemax(T) + 1.0) / (sizeof(T) * 8)
            for x in linspace(-1, float(typemax(T))-tol, 100)
                @test abs(Fixed{Int16, 15}(x) - x) <= tol
            end
        end
    end

which works on my local 64-bit machine but throws an error on the appveyor 32-bit machine:

fractional fixed-point works: Error During Test
  Got an exception of type InexactError outside of a @test
  InexactError()
  Stacktrace:
   [1] trunc at .\float.jl:651
   [2] _linspace(::Float64, ::Float64, ::Int32) at .\twiceprecision.jl:349
   [3] linspace(::Float64, ::Float64, ::Int32) at .\twiceprecision.jl:338
   [4] linspace(::Int32, ::Float64, ::Int32) at .\range.jl:243
   [5] macro expansion at C:\Users\appveyor\.julia\v0.6\SampledSignals\test\WAVDisplay.jl:104 [inlined]
   ...

where WAVDisplay.jl:104 is the linspace line in the test above.

Sorry for the big dump here, I'm not sure exactly what I'm looking for, in the sense that the nondeterministic behavior makes me wonder if it's a Base thing, not just a FixedPointNumbers thing.

Is there a better way to test 32-bit behavior other than pushing to appveyor? Maybe a docker box?

runtests (in 0.7) fails with Test.detect_ambiguities(FixedPointNumbers, Base, Core)

I manually merged #100 into master and ran it. Precompilation works, but Pkg.test fails (obviously, with Base.Test now being the Test stdlib), and also:


               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: https://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.7.0-DEV.3420 (2018-01-16 17:34 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit aea9155* (0 days old master)
|__/                   |  x86_64-linux-gnu

julia> using FixedPointNumbers

julia> using Test

julia> Test.detect_ambiguities(FixedPointNumbers, Base, Core)
8-element Array{Tuple{Method,Method},1}:
 ((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(z::Complex) where T<:Real in Base at complex.jl:37)                        
 ((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::T) where T<:Number in Core at boot.jl:686)                            
 ((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::Char) where T<:Number in Core at boot.jl:684)                           
 ((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(z::Complex) where T<:Real in Base at complex.jl:37)                      
 ((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::Base.TwicePrecision) where T<:Number in Base at twiceprecision.jl:243)
 ((::Type{Normed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/normed.jl:8, (::Type{T})(x::Char) where T<:Number in Core at boot.jl:684)                         
 ((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::Base.TwicePrecision) where T<:Number in Base at twiceprecision.jl:243)  
 ((::Type{Fixed{T,f}})(x) where {T, f} in FixedPointNumbers at /home/lobi/.julia/v0.7/FixedPointNumbers/src/fixed.jl:8, (::Type{T})(x::T) where T<:Number in Core at boot.jl:686)                              

Array iteration is slow

Here's an example:

julia> f = x->begin 
              i=0;
              for k in eachindex(x)
                   i+=x[k]
              end
       end
(::#21) (generic function with 1 method)

julia> d = rand(N2f14, 100,100,100,100);

julia> @time f(d);
 11.956324 seconds (700.00 M allocations: 10.431 GB, 10.25% gc time)

julia> using Images
julia> @time f(rawview(d));
  0.128216 seconds (1.73 k allocations: 81.318 KB)

I'm getting by for now with using Images.rawview when iterating through N2f14 arrays, but I expected that wouldn't be necessary.
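
A hedged workaround sketch in the same spirit as rawview, assuming it is acceptable to accumulate the raw integers and rescale once at the end (rawsum is a hypothetical name):

function rawsum(x::AbstractArray{T}) where {T <: FixedPoint}
    s = 0
    for k in eachindex(x)
        s += reinterpret(x[k])  # raw integer, machine-width addition
    end
    s / FixedPointNumbers.rawone(T)
end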

Info about upcoming removal of packages in the General registry

As described in https://discourse.julialang.org/t/ann-plans-for-removing-packages-that-do-not-yet-support-1-0-from-the-general-registry/ we are planning on removing packages that do not support 1.0 from the General registry. This package has been detected to not support 1.0 and is thus slated to be removed. The removal of packages from the registry will happen approximately a month after this issue is open.

To transition to the new Pkg system using Project.toml, see https://github.com/JuliaRegistries/Registrator.jl#transitioning-from-require-to-projecttoml.
To then tag a new version of the package, see https://github.com/JuliaRegistries/Registrator.jl#via-the-github-app.

If you believe this package has erroneously been detected as not supporting 1.0 or have any other questions, don't hesitate to discuss it here or in the thread linked at the top of this post.

Conversion between UFixed and UInt

@timholy is this intended behaviour?

current master

convert(UFixed8,0xff) = UFixed8(0.004)
convert(UFixed8,0x00) = UFixed8(0.0)
convert(UFixed8,0x01) = UFixed8(1.0)
reinterpret(UFixed8,0xff) = UFixed8(1.0)
reinterpret(UFixed8,0x00) = UFixed8(0.0)
reinterpret(UFixed8,0x01) = UFixed8(0.004)

Thinking I might have broken that I looked at a previous commit
at bb3dee7

convert(UFixed8,0xff) = Ufixed8(0.004)
convert(UFixed8,0x00) = Ufixed8(0.0)
convert(UFixed8,0x01) = Ufixed8(1.0)
reinterpret(UFixed8,0xff) = Ufixed8(1.0)
reinterpret(UFixed8,0x00) = Ufixed8(0.0)
reinterpret(UFixed8,0x01) = Ufixed8(0.004)

Road to 1.0

I'd like to move the whole JuliaImages stack to version numbers >= 1.0 (see JuliaImages/Images.jl#825). Let's use this issue to collect plans for breaking changes in the near future.

Breaking changes include things like renamings, returning a different value than we used to (e.g., #129 might qualify, though rounding errors are a bit ambiguous), or removing externally-visible functionality. Bug fixes, new features, performance improvements, etc., do not count as breaking, though of course they are very welcome.

Multiplication by Ufixed{UInt8, 8}(1) is no longer an identity

Did FixedPoint arithmetic change recently? I'm fairly certain that this

using FixedPointNumbers: UFixed
a = UFixed{UInt8, 8}(1.0)
b = UFixed{UInt8, 8}(0.65)
@show a b a*b
> a = UFixed8(1.0)
> b = UFixed8(0.651)
> a * b = UFixed8(0.647)

was not the case until I Pkg.updated today. It's a bit of a problem for my use case...

support converting between different fixed-point representations

If I have a 24-bit fixed-point number x I might represent it as a Fixed{Int32, 23}. If I later wanted to widen it to a 32-bit number to get extra precision, I might try convert(Fixed{Int32, 31}, x), but that doesn't currently have a method. We could implement it just as a left shift.

I'm not sure what the best overflow checking behavior would be - in this case we'd worry about any values over 1 that would get lost. For a right-shift we'd get extra headroom but lose precision. Currently the float-to-fixed behavior is to throw an InexactError on an overflowing conversion but not when losing precision, so we could match that behavior and only need to check on left-shifts.
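
A hedged sketch of the left-shift idea, assuming both types share the raw type (a negative shift count in Julia shifts the other way, so narrowing silently loses precision here):

function Base.convert(::Type{Fixed{T,f1}}, x::Fixed{T,f2}) where {T,f1,f2}
    # left shift for widening; a check on the discarded high bits could
    # mirror the InexactError behavior of float-to-fixed conversion
    reinterpret(Fixed{T,f1}, x.i << (f1 - f2))
end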

what should be the output type of `AbstractFloat * Normed`

julia> A = N0f8(0.5)
0.502N0f8

julia> 2*A*A ≈ 2*A^2 # they are not equal because of the order of operations
false

This will introduce hard-to-find bugs in future usage.

I think we need to guarantee that *(::Normed, ::Any)::Normed to avoid this.

`convert` sometimes throws InexactError, usually rounds

convert(Fixed{Int8, 7}, x) seems to happily round to the nearest representable value for most values of x, but throws an InexactError for x=0.999. AFAICT it throws the error when the value would get rounded up to 1.0, but 1.0 can't be represented. Is this expected behavior? I would expect that in this case it would act the same as convert(Fixed{Int8, 7}, 1.0), which overflows to -1.0.

julia> convert(Fixed{Int8, 7}, 0.8)
FixedPointNumbers.Fixed{Int8,7}(0.797)

julia> convert(Fixed{Int8, 7}, 0.9)
FixedPointNumbers.Fixed{Int8,7}(0.898)

julia> convert(Fixed{Int8, 7}, 0.999)
ERROR: InexactError()
 in trunc at float.jl:357
 in convert at /Users/srussell/.julia/v0.4/FixedPointNumbers/src/fixed.jl:34

julia> convert(Fixed{Int8, 7}, 1.0)
FixedPointNumbers.Fixed{Int8,7}(-1.0)

Should convert accept values that can't be exactly represented? My understanding was that convert(T, x) is normally supposed to preserve information and throw an error if the value can't be represented, while round(T, x) is supposed to be used to round to the nearest representable number, but I don't think that's explicitly stated, so I could be wrong there.

Performance regression in `Normed` -> `Float` conversions on Julia v1.3.0

I have confirmed that Julia v1.2.0 and v1.3.0 give nearly identical results for Normed->Float conversions (#129, #138). However, I found a performance regression (~2x - 3x slower) on x86_64 machines in the following cases:

  • Vec4{N0f32} -> Vec4{Float32}
  • Vec4{N0f64} -> Vec4{Float32}
  • Vec4{N0f64} -> Vec4{Float64}

(cf. #129 (comment))

I'm not going to rush to investigate the cause or fix this problem. I submit this issue as a placeholder in case any useful information is found.

mixing Fixed and Normed

I recently had to add several differences of N0f8 values and finally take the abs of the result. Thus I tried to do

# a, b of type N0f8
x = zero(Q7f8)
x += a - b

which probably would've failed anyway, since the difference can be negative and thus not representable in N0f8.
So the next step for me would've been to convert a and b to Q7f8, which didn't work.

Thus my question: is there any reason why we can't mix these types?
BTW, promote(N0f8(.5), Q7f8(.5)) throws an error.
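
A hedged sketch of one possible promotion rule: route mixed Normed/Fixed operands through floattype, since no common fixed-point type is guaranteed to represent both ranges exactly:

Base.promote_rule(::Type{N}, ::Type{F}) where {N <: Normed, F <: Fixed} =
    promote_type(FixedPointNumbers.floattype(N), FixedPointNumbers.floattype(F))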

Precompilation fails on 0.5.0-dev

julia> using FixedPointNumbers
INFO: Precompiling module FixedPointNumbers...
ERROR: LoadError: LoadError: error in method definition: function Base.minmax must be explicitly imported to be extended
 in include(::UTF8String) at ./boot.jl:264
 in include_from_node1(::ASCIIString) at ./loading.jl:417
 in include(::ASCIIString) at ./boot.jl:264
 in include_from_node1(::ASCIIString) at ./loading.jl:417
 in eval(::Module, ::Any) at ./boot.jl:267
 [inlined code] from ./sysimg.jl:14
 in process_options(::Base.JLOptions) at ./client.jl:239
 in _start() at ./client.jl:318
while loading /home/synthetica/.julia/v0.5/FixedPointNumbers/src/ufixed.jl, in expression starting on line 131
while loading /home/synthetica/.julia/v0.5/FixedPointNumbers/src/FixedPointNumbers.jl, in expression starting on line 59
ERROR: Failed to precompile FixedPointNumbers to /home/synthetica/.julia/lib/v0.5/FixedPointNumbers.ji
 in error(::ASCIIString) at ./error.jl:21
 in compilecache(::ASCIIString) at ./loading.jl:496
 in require(::Symbol) at ./loading.jl:355
 in eval(::Module, ::Any) at ./boot.jl:267

This error can be fixed by adding import Base.minmax to the file ufixed.jl. (I don't want to deal with git right now.)

Tab-completion REPL crash

I commented on issue #8199 and traced the source of a REPL crash to the ufixed.jl file: lines 156-165 somehow interfere with a base UnionType.

My error is:

julia> using FixedPointNumbers

julia> writedlm(ERROR: type UnionType has no field body
 in show at show.jl:80 (repeats 2 times)
 in print_to_string at ./string.jl:24
 in argtype_decl at methodshow.jl:18
 in arg_decl_parts at methodshow.jl:30
 in show at methodshow.jl:36
 in print_to_string at string.jl:24
 in string at string.jl:31
 in complete_methods at ./REPLCompletions.jl:144
 in completions at ./REPLCompletions.jl:207
 in completions_3B_3707 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so
 in complete_line at REPL.jl:280
 in complete_line at LineEdit.jl:141
 in complete_line at LineEdit.jl:139
 in anonymous at LineEdit.jl:1175
 in anonymous at LineEdit.jl:1197
 in prompt! at ./LineEdit.jl:1397
 in run_interface at ./LineEdit.jl:1372
 in run_interface_3B_3724 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so
 in run_frontend at ./REPL.jl:819
 in run_repl at ./REPL.jl:170
 in _start at ./client.jl:399
 in _start_3B_3590 at /home/fl/Programs/julia/usr/bin/../lib/julia/sys.so

After commenting out the show function in ufixed.jl, I no longer receive that error.

# Show
function show(io::IO, x::Ufixed)
    print(io, "Ufixed", nbitsfrac(typeof(x)))
    print(io, "(")
    showcompact(io, x)
    print(io, ")")
end
showcompact(io::IO, x::Ufixed) = show(io, round(convert(Float64,x), iceil(nbitsfrac(typeof(x))/_log2_10)))

show{T<:Ufixed}(io::IO, ::Type{T}) = print(io, "Ufixed", nbitsfrac(T))

My julia version:

julia> versioninfo()
Julia Version 0.4.0-dev+5127
Commit 6277015* (2014-09-05 03:57 UTC)
DEBUG build
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

I checked if this error is reproducible in the stable .3 release: it is.

Test fails on 32-bit Windows 7

Running test in Julia v0.3:

julia> Pkg.test("FixedPointNumbers")
INFO: Testing FixedPointNumbers
ERROR: test failed: one(T) == 1
while loading \.julia\v0.3\FixedPointNumbers\test\ufixed.jl, in expression starting on line 20
while loading \.julia\v0.3\FixedPointNumbers\test\runtests.jl, in expression starting on line 2
...
ERROR: FixedPointNumbers had test errors

julia> one(Ufixed8)
Ufixed8(1.0)

julia> ans == 1
false

However, the following code works as expected:

julia> f() = Ufixed8(1.0) == 1
f (generic function with 2 methods)

julia> f()
true

julia> apply(==, promote(one(Ufixed8), 1))
true
