nnpdf / nnusf
Predictions for all-energy neutrino structure functions
Home Page: https://nnpdf.github.io/nnusf/
License: GNU General Public License v3.0
Since the grids with theory uncertainties have now been generated and parsed (#47), we should now add a function that constructs the corresponding covariance matrices and adds them to the already present matching covmats.
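A minimal sketch of what such a function could look like (the function name and array layout are assumptions, not the actual nnusf API). Assuming theory and experimental/matching uncertainties are uncorrelated, the two covariance matrices simply add:

```python
import numpy as np

def total_covmat(exp_covmat: np.ndarray, theory_shifts: np.ndarray) -> np.ndarray:
    """Add a theory covariance matrix to an existing covariance matrix.

    exp_covmat:    (ndata, ndata) existing (e.g. matching) covariance matrix
    theory_shifts: (nvar, ndata) shifts of the theory predictions
                   (e.g. scale variations), one row per variation

    Hypothetical helper, not the nnusf implementation.
    """
    # Theory covariance as the outer-product average of the shifts
    deltas = theory_shifts - theory_shifts.mean(axis=0)
    th_covmat = deltas.T @ deltas / theory_shifts.shape[0]
    # Uncorrelated sources of uncertainty: covariances add
    return exp_covmat + th_covmat
```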
The following are a few things that should be addressed with a view to making the code public:
It would be a nice feature to have the possibility to include a container in the `on.workflow_call.inputs`, in case some actions require specific programs. A status check (as illustrated here) will be required in case no container is passed.
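A minimal sketch of how such an optional container input could look in a reusable workflow (the input name and the empty-string fallback are assumptions to be checked against the GitHub Actions documentation):

```yaml
on:
  workflow_call:
    inputs:
      container:
        description: "Optional container image to run the jobs in"
        required: false
        type: string
        default: ""

jobs:
  tests:
    runs-on: ubuntu-latest
    # An empty value is expected to disable the container, so the job
    # falls back to the bare runner when no image is passed; that is
    # the case the status check above would have to cover.
    container: ${{ inputs.container }}
    steps:
      - uses: actions/checkout@v3
```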
The computation of the GLS sum rule is now broken due to a type change in the coupling class of `eko.coupling`. In particular, at some point `thresholds_ratios` was of type `MatchingScales`; now it is a `list`. Since `eko` is not a direct dependency, its version depends on `yadism` (which now has to be `v0.12.3`).
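Until the minimum `eko` version can be pinned, a small compatibility shim could normalize the two representations (purely illustrative; the attribute and type names follow the description above):

```python
def as_threshold_list(thresholds_ratios):
    """Normalize `thresholds_ratios` across eko versions.

    Older eko versions exposed a `MatchingScales` object, newer ones a
    plain list; downstream code should only rely on the list form.
    Hypothetical helper, not part of eko or nnusf.
    """
    if isinstance(thresholds_ratios, list):
        return thresholds_ratios
    # Iterating works for both sequence-like wrapper objects and tuples
    return list(thresholds_ratios)
```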
The tables below are missing:
- Obs: Qbar
- Obs: A, F_W

With 42cb7e3, the NN function (especially for
There are several possibilities to address this, but the easiest check would be to use a linear activation throughout the network and use
For some cross sections (CHORUS and NUTEV), the coefficients seem to be wrong. This might explain why we couldn't fit these datasets. The plots comparing the datasets (for all values of …) with the Yadism predictions are given below:
For the record, the coefficients for the CDHSW experiment are fine:
Currently, only the total
I was just thinking that one other thing we could try is to take one step/epoch per experiment instead of calculating the chi2 for all experiments and then taking the step. I don't really have high hopes, but perhaps this added "randomization" can prevent getting "stuck".
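A schematic of that idea, with hypothetical names (`optimizer_step` stands for whatever computes the per-experiment chi2 and updates the parameters; this is a sketch, not the fitting code):

```python
import random

def train_per_experiment(model, experiments, optimizer_step, epochs):
    """Take one optimization step per experiment, instead of one step
    on the total chi2 summed over all experiments.

    optimizer_step(model, exp) computes the chi2 of a single experiment
    and updates the model parameters in place.
    """
    for _ in range(epochs):
        # Shuffle so the visiting order differs between epochs,
        # adding the extra "randomization" discussed above.
        order = list(experiments)
        random.shuffle(order)
        for exp in order:
            optimizer_step(model, exp)
```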
Thought I'd open this issue so I don't forget :). Other ideas are of course also welcome.
There seem to be some issues with the (quoted) validation
| Dataset | Epoch | REP ID |  |  |  |  |  |  |
|---|---|---|---|---|---|---|---|---|
| NUTEV_F2 | 2316 | 35 | 58 | 1.847 | 20 | 922.098 | 78 | 8.721 |
The reason why there are
As shown above, there shouldn't be a reason for the validation
Datasets that still need to be implemented:
- F_W
- A
- Qbar
It seems that the models are over-fitting on the Yadism matching data; see, for example, the following report. We might need to do something about this.
The CLI option exists, but so far it only calculates the PDF error
Given that we are more or less ready to run a full fit with our machine learning framework, it would be good to collect various plots here: data vs. Yadism (in order to make sure that the coefficients are correct), kinematics, covariance matrices, etc.
A constraint that we should impose at the level of the fit is the boundary condition
In addition to the tests that are already listed/produced in the paper, the following are the tests that need to be carried out once the final baselines (with and without Yadism pseudodata) are available:
As agreed, for nNNPDF30_nlo_as_0118_p, which contains constraints from the LHCb data, we need to create a functionality that enforces isoscalarity on a given PDF set.
Such a constraint is enforced by replacing each PDF flavour with a linear combination of the proton- and neutron-bound PDFs as follows:
Assuming isospin symmetry, the proton-bound and
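The replacement described above can be written schematically as follows (a standard form of the isoscalarity relation; the original equations did not survive extraction, so this is a reconstruction):

```latex
f^{N/A}(x, Q^2) = \frac{Z}{A}\, f^{p/A}(x, Q^2)
  + \frac{A - Z}{A}\, f^{n/A}(x, Q^2),
\qquad
u^{n/A} = d^{p/A}, \quad d^{n/A} = u^{p/A}
\ \text{(isospin symmetry)} .
```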
Below are a few things that we should give some thought to beforehand. These concern some of the technical difficulties I foresee, mainly in terms of implementation. There exist various solutions for each of them; we just need to find the optimal ones. Hence, this thread will not only serve as a place to collect ideas, but most importantly to converge on final decisions concerning the various aspects of the implementation (design choices, etc.).
Ideally, we would like to parametrize the three structure functions (as in n3fit), meaning that for each measurement one specifies which structure functions are active. Better suggestions on what should be compared at the
Given that we will implement our own module(s) for the reading & parsing of the experimental datasets, we will also have to compute the covariance matrix ourselves, as we will no longer be able to rely on validphys.
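Without validphys, the construction boils down to the usual formula C_ij = δ_ij σ_i² + Σ_k β_ik β_jk for uncorrelated and fully-correlated uncertainties. A minimal numpy sketch (function name and array layout are assumptions):

```python
import numpy as np

def build_covmat(stat: np.ndarray, sys_corr: np.ndarray) -> np.ndarray:
    """Experimental covariance matrix from the published uncertainties.

    stat:     (ndata,) uncorrelated (statistical) uncertainties
    sys_corr: (ndata, nsys) fully-correlated systematic uncertainties

    C_ij = delta_ij * stat_i**2 + sum_k sys_corr[i, k] * sys_corr[j, k]
    """
    return np.diag(stat**2) + sys_corr @ sys_corr.T
```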
One of the technical parts we need to think about in terms of the fitting code is an easy implementation of the stopping. n3fit has a module that can perform custom callbacks on a PDF-like fitting model. However, this module is very complicated and contains a huge amount of things we do not want. Thus, we need something simple enough that is able to check all the various features we would like, on top of tracking the history of the
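A simple stand-alone tracker along those lines could look like this (a sketch, not the n3fit module; names are illustrative):

```python
class Stopping:
    """Minimal early-stopping tracker.

    Tracks the best validation chi2 seen so far, keeps the full
    history, and signals when no improvement has occurred for
    `patience` consecutive epochs.
    """

    def __init__(self, patience: int):
        self.patience = patience
        self.best_chi2 = float("inf")
        self.best_epoch = 0
        self.history = []

    def monitor(self, epoch: int, val_chi2: float) -> bool:
        """Record this epoch; return True if training should stop."""
        self.history.append(val_chi2)
        if val_chi2 < self.best_chi2:
            self.best_chi2 = val_chi2
            self.best_epoch = epoch
        return (epoch - self.best_epoch) >= self.patience
```

Additional checks (positivity, sum rules, etc.) could then be bolted onto `monitor` without inheriting the full n3fit machinery.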
An important part of the fitting concerns the constraints that one can impose on the structure functions, both along the momentum fraction
Gross Llewellyn-Smith:
Adler:
From momentum sum rules:
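For reference, the standard forms of the first two sum rules read (reconstructed from the literature, since the equations themselves did not survive extraction):

```latex
\text{GLS:} \quad
\int_0^1 \mathrm{d}x\, F_3^{\nu N}(x, Q^2)
  = 3\left(1 - \frac{\alpha_s(Q^2)}{\pi} + \mathcal{O}(\alpha_s^2)\right),
\qquad
\text{Adler:} \quad
\int_0^1 \frac{\mathrm{d}x}{x}
  \left[F_2^{\bar{\nu} p}(x, Q^2) - F_2^{\nu p}(x, Q^2)\right] = 2 .
```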
The relative imports after `pip install nnusf` are broken, which means that the contents of `commondata` and `coefficients` cannot be accessed. The proposed solution is the following:
In this case, the imports will be exactly the same for the development and pip-installation mode. Optionally, one can add versioning to the data and dump the version in the output metadata to make sure that everything is consistent.
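One conventional way to achieve this, assuming the data directories are shipped as package data inside `nnusf` (the names here are illustrative), is to resolve them through `importlib.resources` rather than relative paths, which works identically for a development checkout and a pip installation:

```python
from importlib import resources

def package_data(package: str, subdir: str):
    """Locate a data directory shipped inside an installed package.

    E.g. package_data("nnusf", "commondata") would resolve the
    packaged commondata directory (hypothetical layout), whether the
    package is pip-installed or imported from a source checkout.
    Requires Python >= 3.9 for importlib.resources.files.
    """
    return resources.files(package) / subdir
```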
In order to account for top-quark mass effects, we need to re-compute the Yadism pseudodata using the FFNS5. For this we need two classes of grids:
It would be nice to have the tests up and running again; this depends on NNPDF/workflows#6. The problem right now seems to be that the container registry cannot be pulled.
- Use `dataclass` objects instead of dicts
- Add `patience_epochs` or `patience_fraction` to the runcard. The current `stopping_patience` is in epochs, which is different from n3fit, where it is a fraction; this might lead to confusion
- Add a `generate_pseudodata` boolean flag option to the fit runcard
- Make `callbacks.py` more sensible
- Keep `rich` from messing with `ipdb`