lzkelley / kalepy
Kernel Density Estimation and (re)sampling
License: Other
When trying to make a carpet plot with rotate=True, I need to add a 'shift' of 0.25 to align the x position to 0
```python
import numpy as np
import matplotlib.pyplot as plt
import kalepy as kale


def func(xx):
    zz = np.power(xx, 1.5) * np.exp(-xx)
    return zz


# construct a target distribution, normalized to NUM total "counts"
NUM = 1e3
xx = kale.utils.spacing([1e-2, 1e1], scale='log', num=100)
yy = func(xx)
Y = np.cumsum(yy)
norm = NUM / Y[-1]
yy *= norm
Y *= norm
dydx = np.diff(Y) / np.diff(xx)
xc = kale.utils.midpoints(xx)

NREALS = 100
# NSAMP = 1e2
NSAMP = int(NUM)
nbins = xx.size - 1
dist = np.zeros((nbins, NREALS))
wdist = np.zeros((nbins, NREALS))
for rr in range(NREALS):
    # sample the full grid directly
    ss = kale.sample_grid(xc, dydx, nsamp=NSAMP)
    dist[:, rr], _ = np.histogram(ss, bins=xx)

    # outlier sampling: direct samples below the threshold, weighted centroids above
    ss, ww = kale.sample_outliers(xc, dydx, 10.0, nsamp=NSAMP)
    ss = ss.squeeze()
    # print(np.shape(ss), np.shape(ww))
    wdist[:, rr], _ = np.histogram(ss, bins=xx, weights=ww)
    # wdist[:, rr], _ = np.histogram(ss, bins=xx)

fig, ax = plt.subplots()
ax.set(xscale='log', yscale='log')
ax.plot(xx, yy)
# ax.plot(xc, dydx)

ave = np.mean(dist, axis=-1)
ax.plot(xc, ave, 'r--')
ave = np.mean(wdist, axis=-1)
ax.plot(xc, ave, 'b--')
plt.show()
fig.savefig('bug.png')
```
I think the problem is that the same total number of samples is drawn regardless of how much total weight is already included in the centroid values. For example, in this example plot, almost 1000 samples are drawn from the direct-sampling region (the sides), even though a large total weight is already accounted for by the centroid-sampled region (the center).
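One possible fix, sketched below with plain NumPy, would be to allocate the number of directly drawn samples in proportion to the total weight that falls in the outlier (below-threshold) region, instead of always drawing the full `nsamp` there. The helper name and interface here are hypothetical, not kalepy's actual implementation:

```python
import numpy as np

def split_sample_counts(weights, threshold, nsamp):
    """Hypothetical sketch: bins with weight below `threshold` are sampled
    directly; heavier bins are represented by their centroid with an attached
    weight. Return (n_direct, centroid_weights), where n_direct is scaled by
    the fraction of total weight in the directly-sampled region."""
    weights = np.asarray(weights, dtype=float)
    outlier = weights < threshold
    w_out = weights[outlier].sum()
    w_tot = weights.sum()
    # draw only the share of samples corresponding to the outlier weight
    n_direct = int(np.round(nsamp * w_out / w_tot))
    return n_direct, weights[~outlier]

# e.g. one heavy central bin and four light outlier bins:
n_direct, w_cent = split_sample_counts([0.5, 2.0, 50.0, 3.0, 0.2], 10.0, 1000)
# only ~10% of the weight is in the outlier region, so only ~10% of the
# samples are drawn there; the rest is carried by the weighted centroid
```

This keeps the expected total weight of (direct samples + weighted centroids) equal to the target, rather than double-counting the centroid region.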
This should definitely work for finite kernels, and could be a fair approximation even for infinite ones.
Hi, amazing package! Just noticed that the readthedocs page does not seem to display the full API of each module / class / function. E.g. this page, https://kalepy.readthedocs.io/en/latest/kalepy.html#kalepy-kde-module, is empty
Hi, thanks for this very powerful package. My analysis group and I are trying to incorporate it into one of our LHCb analyses, and the results are very promising.
However, we are missing multithreading functionality. We have to deal with many millions of events, and this can take many hours or days on a single core. I managed to add a numba parallel range myself inside the kernels.py/_evaluate_numba function, but it would be great to have an option in the constructor, or an additional method (or whatever), to do this automatically.
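Short of the numba `prange` change described above, one coarse-grained workaround is to chunk the evaluation points across a thread pool; NumPy releases the GIL inside large array operations, so this can give a real speedup. The sketch below uses a plain brute-force Gaussian KDE so it is self-contained — the function names are illustrative, not kalepy's API:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gaussian_kde_eval(data, bw, points):
    # brute-force Gaussian KDE: average of kernels centered on each datum
    zz = (points[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * zz**2).sum(axis=1) / (len(data) * bw * np.sqrt(2 * np.pi))

def kde_eval_parallel(data, bw, points, nworkers=4):
    # split the target points into chunks, evaluate each chunk in a worker
    chunks = np.array_split(points, nworkers)
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        parts = pool.map(lambda chunk: gaussian_kde_eval(data, bw, chunk), chunks)
    return np.concatenate(list(parts))

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
points = np.linspace(-3, 3, 1000)
serial = gaussian_kde_eval(data, 0.2, points)
parallel = kde_eval_parallel(data, 0.2, points)
# the two results agree; only the evaluation is distributed across threads
```

Chunking over evaluation points (rather than over data) keeps each worker independent, so no reduction step is needed beyond concatenation.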
I wanted to use the kalepy package on an sWeighted distribution, so there are many events with negative weights, and I can't renormalize them since that would change the distribution. There is a ValueError raised in kalepy/kde.py, line 214: "Invalid weights entries, all must be finite and > 0!".
I would like to know whether the code could be adapted to run with negative weights, or whether the method simply does not allow negative weights for some reason. Could someone clarify this for me?
Thanks in advance and kind regards.
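One workaround sometimes used for sWeighted samples (not built into kalepy, and sketched here with a plain Gaussian KDE so it is self-contained) is to split the events by weight sign, build one estimate from the positive-weight events and one from the absolute negative weights, and take the signed difference:

```python
import numpy as np

def gaussian_kde(data, weights, bw, points):
    # un-normalized weighted sum of Gaussian kernels
    zz = (points[:, None] - data[None, :]) / bw
    kk = np.exp(-0.5 * zz**2) / (bw * np.sqrt(2.0 * np.pi))
    return kk @ weights

def signed_kde(data, weights, bw, points):
    # hypothetical helper: positive-weight KDE minus |negative|-weight KDE,
    # normalized by the net weight so the result integrates to ~1
    data, weights = np.asarray(data, float), np.asarray(weights, float)
    pos = weights > 0
    dens = gaussian_kde(data[pos], weights[pos], bw, points)
    if np.any(~pos):
        dens -= gaussian_kde(data[~pos], -weights[~pos], bw, points)
    return dens / weights.sum()

rng = np.random.default_rng(1)
data = rng.normal(size=500)
weights = np.where(rng.random(500) < 0.9, 1.0, -0.5)  # toy negative sWeights
points = np.linspace(-3, 3, 201)
dens = signed_kde(data, weights, 0.3, points)
```

Note the resulting estimate is not guaranteed to be non-negative everywhere, which is an inherent property of signed-weight density estimates rather than a bug in the sketch.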
From the JOSS,
Reflecting boundary conditions can be used to improve reconstruction accuracy. For example, with data drawn from a log-normal distribution, a standard KDE will produce 'leakage' outside of the domain. To enforce the restriction that f(x < 0) = 0 (which must be known a priori), the kernel is redefined such that K_H(x < 0) = 0, and re-normalized to preserve unitarity. This example is shown in Figure 1, with histograms in the upper panel and KDEs on the bottom.
Isn't this then absorbing boundary conditions? Or am I missing something?
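For reference, the "reflecting" treatment can be sketched directly (illustrative code, not kalepy's internals): each kernel is mirrored about the boundary, so the mass that would leak below x = 0 is folded back into the domain rather than simply truncated and renormalized as the quoted passage describes:

```python
import numpy as np

def kde_reflect(data, bw, points):
    # reflecting KDE at boundary x = 0: for each datum d, use
    # K(x - d) + K(x + d), i.e. the kernel plus its mirror image,
    # evaluated only on x >= 0
    zz = (points[:, None] - data[None, :]) / bw
    zr = (points[:, None] + data[None, :]) / bw  # kernels mirrored about 0
    kk = np.exp(-0.5 * zz**2) + np.exp(-0.5 * zr**2)
    dens = kk.sum(axis=1) / (len(data) * bw * np.sqrt(2 * np.pi))
    return np.where(points >= 0.0, dens, 0.0)

rng = np.random.default_rng(2)
data = rng.lognormal(size=2000)
points = np.linspace(0.0, 8.0, 801)
dens = kde_reflect(data, 0.25, points)
# trapezoidal integral over [0, 8]: close to 1, since no mass leaks below 0
mass = np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(points))
```

In this construction total probability is conserved on x >= 0 by folding, not by rescaling, which is one way to phrase the reflecting-vs-absorbing distinction being asked about.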