
Comments (7)

jishnub commented on June 20, 2024

Is a few ns really a concern?


Moelf commented on June 20, 2024

Yes, to some degree: we routinely fill thousands of histograms with hundreds of millions of entries.


mikmoore commented on June 20, 2024

Fiddling with the source code, I can cut the runtime in half by replacing the a[n] access with another variable (like h). So it appears a significant amount of the runtime is devoted to materializing a[n] with its TwicePrecision step.

Note that this issue is a tad misleading. The slow part is accurately computing a[n] (which any accurate method must sometimes do, for tie-breaking purposes), and the collect in the benchmark moves that computation outside the timed code. So we can't be faster than computing a[n]; the collected vector can only be faster because there a[n] is merely a value lookup rather than a calculation.

Which is to say that any significant improvement will require either that a[n] be faster or that we compute it less often. I doubt it can be made much faster (and still be correct), since it's unlikely it was written poorly in the first place. It could be computed less often if we used finer-grained rounding so that the check can be skipped in non-borderline cases.

Here is a toy concept of checking a[n] less often. There may be off-by-one style errors in this implementation -- I was looking at the run-speed concept rather than ensuring it was definitely always correct. Also, for very long ranges (when nc > maxintfloat(T)), the original (essentially c = 1) may risk roundoff error (I'd have to think harder), and this implementation (c > 1) definitely does. So a larger c widens the regime where roundoff error is possible.

function dev_searchsortedlast(a::AbstractRange{<:Real}, x::Real)::keytype(a)
    o = Base.Order.Forward # should be an input
    Base.require_one_based_indexing(a)
    f, h, l = first(a), step(a), last(a)
    if Base.Order.lt(o, x, f)
        0 # x is below the first element
    elseif !Base.Order.lt(o, x, l) || h == 0
        length(a) # x is at or beyond the last element, or the step is zero
    else
        c = 2^3 # subdivide each step into c sub-bins
        nc = round(Int, (x - f) / h * c, RoundNearest)
        n, r = fldmod(nc, c)
        # if r==0, we are on the border between bins and n+1 might be too big,
        # so only then pay for the accurate a[n+1] lookup to break the tie
        iszero(r) && Base.Order.lt(o, x, a[n+1]) ? n : n+1
    end
end

But it only increases throughput by about 20% for me. Finer discretization (larger c) didn't improve things noticeably more. Personally, I'm not completely sold that this approach is worth it.

julia> using BenchmarkTools

julia> e1 = 0:0.1:1;

julia> @benchmark searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 997 evaluations.
 Range (min … max):  21.264 ns … 77.232 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     21.364 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   22.000 ns ±  3.790 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █▆                                                          ▁
  ███▇▆▇▆▆▄▃▃▃▃▁▃▄▃▁▃▁▃▃▃▁▃▁▁▄▁▁▃▁▃▃▁▁▁▁▄▅▇▅▅▅▄▄▄▅▃▃▁▁▃▃▃▃▆▆▅ █
  21.3 ns      Histogram: log(frequency) by time      42.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark dev_searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  16.633 ns … 85.872 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     16.834 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   18.441 ns ±  4.441 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █
  █▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▃▂▂ ▂
  16.6 ns         Histogram: frequency by time          29 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> e3 = StepRangeLen(0.0, 0.1, 11); # less precise step

julia> @benchmark searchsortedlast($e3, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 998 evaluations.
 Range (min … max):  14.128 ns … 66.032 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     15.030 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   15.812 ns ±  4.555 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▇▇█▁                                                        ▂
  ████▆▄▃▃▅▆▆▇▇▇██▆▅▄▄▁▃▁▁▁▃▃▁▁▁▁▁▄▇███▇▆▆▅▅▁▄▃▄▄▇▇▇▇▇▅▅▅▃▄▅▅ █
  14.1 ns      Histogram: log(frequency) by time        39 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark dev_searchsortedlast($e3, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):   9.309 ns … 71.071 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):      9.610 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   10.699 ns ±  2.950 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▆█ ▄
  ▅██▆█▆▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▂▁▁▁▁▁▂▂▅▅ ▃
  9.31 ns         Histogram: frequency by time        16.9 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Note the tests with e3, which show the benefit of a range with less step precision (compare e1.step to e3.step).
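
For concreteness, this is the difference being compared (an illustrative snippet; only the field types matter here):

e1 = 0:0.1:1                     # default colon syntax: the step is stored as a TwicePrecision
e3 = StepRangeLen(0.0, 0.1, 11)  # explicit constructor: the step is a plain Float64

typeof(e1.step)  # Base.TwicePrecision{Float64}
typeof(e3.step)  # Float64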


Moelf commented on June 20, 2024

Wait, why do we need to look up a[n]? My understanding is that for uniform binning you just need:
https://github.com/Moelf/FHist.jl/blob/160d675455a9e40a909e3f97d15a3f9a6c5e0659/src/polybinedges.jl#L45

where the inv_step is just a pre-computed inv(step(range))
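
A minimal sketch of that idea (not FHist.jl's actual code; the UniformBins and binindex names here are made up for illustration):

# pre-compute the reciprocal of the step once, so each lookup is just a
# subtraction, a multiplication, and a floor -- no accurate a[n] lookup
struct UniformBins
    first::Float64
    inv_step::Float64  # pre-computed inv(step(range))
    nbins::Int
end

UniformBins(r::AbstractRange) = UniformBins(first(r), inv(step(r)), length(r) - 1)

# naive 1-based bin index, with no bounds checking and no correction for the
# floating-point rounding issues discussed in the next comment
binindex(b::UniformBins, x::Real) = floor(Int, (x - b.first) * b.inv_step) + 1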


mikmoore commented on June 20, 2024

wait why do we need to look up a[n]?

In infinite precision you can use simple arithmetic like FHist.jl attempts to do. But these are floating point numbers with finite precision. Let's apply the algorithm linked from FHist.jl to the following case:

julia> v1 = 0.0:0.2:1
0.0:0.2:1.0

julia> (step(v1), v1.step) # notice that `step(v1)` does not give the full info
(0.2, Base.TwicePrecision{Float64}(0.19999999999999973, 2.6645352591003756e-16))

julia> collect(v1)
6-element Vector{Float64}:
 0.0
 0.2
 0.4
 0.6
 0.8
 1.0

julia> (0.6 - first(v1)) / step(v1) # == 0.6 / 0.2
2.9999999999999996

julia> floor(Int, (0.6 - first(v1)) / step(v1)) + 1 # wrong answer
3

julia> (0.6 - first(v1)) * inv(step(v1)) # note: `inv(step())` gets lucky in this case, but is less accurate in general
3.0

julia> searchsortedlast(v1, 0.6) # right answer
4

Notice that the calculation in FHist.jl actually gets the answer right here if we use * inv(step). But that is pure luck: in general this is less accurate and more prone to mistakes. Repeat the above with a different range. This one has a step that is represented exactly, so it should be easier.

julia> v2 = 0.0:49.0:196.0
0.0:49.0:196.0

julia> (step(v2), v2.step) # the `step` is represented exactly in just a Float64
(49.0, Base.TwicePrecision{Float64}(49.0, 0.0))

julia> (3*49.0 - first(v2)) / step(v2) + 1 # will give the correct answer
4.0

julia> (3*49.0 - first(v2)) * inv(step(v2)) + 1 # will give the wrong answer
3.9999999999999996

Ultimately, these finite-precision issues mean that it is very difficult (i.e., expensive) to get the answer definitely correct in all cases from just the start and the step. To get it right, it's safer, and no more expensive, to simply check the candidate index against a value in the collection.
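
In code, that "guess, then verify against the collection" pattern looks roughly like this (a simplified sketch, not the stdlib implementation; the real searchsortedlast also handles reverse orderings, empty ranges, and zero steps):

function guess_then_check(a::AbstractRange{<:Real}, x::Real)
    # cheap floating-point guess, which may be off by one near bin borders
    n = clamp(floor(Int, (x - first(a)) / step(a)) + 1, 0, length(a))
    # verify against the accurately computed range values and correct the guess
    while n < length(a) && a[n+1] <= x
        n += 1
    end
    while n >= 1 && a[n] > x
        n -= 1
    end
    return n
end

With the v1 = 0.0:0.2:1.0 example above and x = 0.6, the initial guess is 3 and the verification step bumps it to 4, matching searchsortedlast.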


Moelf commented on June 20, 2024

sigh, right, I remember it all now, this is the trade-off of our range objects being more accurate. Thanks.


LilithHafner commented on June 20, 2024

As expected, the O(1) method outperforms the O(log(n)) method on larger inputs:

julia> e1 = 0:0.00001:1;

julia> @benchmark searchsortedlast($e1, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  8.258 ns … 13.847 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     8.341 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   8.419 ns ±  0.273 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

      ▄   █                                                   
  ▃▁▁▁█▁▁▁█▁▁▁▃▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▁▁▂▁▁▁▅▁▁▁▄▁▁▁▂ ▂
  8.26 ns        Histogram: frequency by time        8.84 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> e2 = collect(0:0.00001:1);

julia> @benchmark searchsortedlast($e2, x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 986 evaluations.
 Range (min … max):  52.104 ns … 83.883 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     57.809 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   56.245 ns ±  2.712 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▆▇▂▃▂ ▂                      ▄        ▇█▄▅▃▁▃▂             ▂
  ▆████████▆▇▅▇▆▆▆▄▅▅▄▄▃▃▃▁▁▁▁▄▇████▅▇▆▅▇██████████▇▇█▇▇▆▆▄▅▅ █
  52.1 ns      Histogram: log(frequency) by time      60.6 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

