Comments (8)
Why do you think they are overfitted? The training chi2 is ~1 for the yadism data and ~2 for the experimental data, which is what one would expect, I think. Namely, the yadism data has only one level of statistical fluctuations (the one corresponding to the pseudodata generation), while the experimental data has the fluctuations from pseudodata generation on top of the fluctuations already present because the experimental central values are themselves randomly sampled (accompanied by possible inconsistencies due to experiment or theory that may further affect the chi2).
If you want to really test for overfitting you could of course check how the agreement with the fitted yadism data compares to the agreement with some other predictions that are not in the matching dataset, but for now these results don't really worry me too much.
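The one-level vs two-level counting above can be illustrated with a quick Monte Carlo toy (hypothetical Gaussian uncertainties, not the actual nnusf pipeline): a single layer of fluctuations gives chi2/ndat ~ 1 against the truth, while a second independent layer doubles the variance and pushes it towards ~2.

```python
import numpy as np

rng = np.random.default_rng(0)
ndat, sigma = 10_000, 1.0
truth = rng.uniform(1.0, 2.0, ndat)  # stand-in for the true observables

# One layer of fluctuations (level-1, like the yadism pseudodata):
level1 = truth + rng.normal(0.0, sigma, ndat)
chi2_level1 = np.mean(((level1 - truth) / sigma) ** 2)

# A second, independent layer on top (like pseudodata generated from
# experimental central values that were themselves sampled):
level2 = level1 + rng.normal(0.0, sigma, ndat)
chi2_level2 = np.mean(((level2 - truth) / sigma) ** 2)

print(chi2_level1, chi2_level2)  # ~1 and ~2
```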
from nnusf.
I was indeed expecting the
Is the exp chi2 defined wrt the central value PDF or is it calculated for each PDF and then averaged? In the first case I would indeed expect it to vanish as 1/sqrt(Nrep). In the latter case I am not entirely sure what to expect, but note that also for a regular NNPDF fit the average experimental chi2 is quite a bit lower than the average test/validation losses (with a tiny contribution coming from the t0 prescription being used for exp and not tr/vl losses)
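For what it's worth, the two definitions behave very differently in a toy model (made-up Gaussian residuals, assuming each replica reproduces the data up to independent fitting noise): the chi2 of the central (replica-averaged) prediction falls towards zero as Nrep grows, since the central residuals shrink like 1/sqrt(Nrep), while the per-replica average stays O(1).

```python
import numpy as np

rng = np.random.default_rng(1)
ndat, sigma = 5_000, 1.0
data = rng.uniform(1.0, 2.0, ndat)

for nrep in (10, 100, 1000):
    # Toy replicas: each reproduces the data up to independent residuals.
    replicas = data + rng.normal(0.0, sigma, (nrep, ndat))
    central = replicas.mean(axis=0)
    chi2_central = np.mean(((central - data) / sigma) ** 2)
    chi2_per_rep = np.mean(((replicas - data) / sigma) ** 2)
    print(nrep, chi2_central, chi2_per_rep)  # ~1/nrep vs ~1
```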
Currently, the experimental
So the pseudodata by construction has chi2=1 (within stat. fluctuations). In the report it seems that somehow the chi2 defined wrt the central data is close to 0, while the chi2 defined wrt the pseudodata is close to 1. This almost gives the impression that the NN doesn't really fit the fluctuations but is rather unaffected by the level-1 noise we introduce when generating pseudodata. Not sure if this is a problem or not (maybe it is, since the uncertainties of the matching are coming from NNPDF4.0 and maybe we want to reproduce them exactly and not get smaller uncertainties), but if anything I would think it's underfitting rather than overfitting. If it is a problem, adding an additional layer of noise (so level-2 data) should place the matching data on the same footing as the real data and change the posterior distribution in the matching region.
Hi @RoyStegeman @Radonirinaunimi I would treat yadism and real data on exactly the same footing, so also the yadism pseudo-data should be fluctuated twice wrt the true values (like in a level 2 closure test, basically).
If we do this, we should find that after the fit, for individual replicas chi2/ndat \sim 2, while for the central prediction averaged over replicas chi2/ndat \sim 1, both for real data and for yadism pseudo-data.
I think this is the correct approach conceptually, and it is also easier to explain.
Does it make sense?
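Those expectations can be sanity-checked in a toy (hypothetical Gaussian model with an idealised fit, not the actual code): data fluctuated once around the truth, and replicas that each recover the truth up to their own independent fluctuation. The per-replica chi2 then sees both noise sources (~2), while the central prediction averages one of them away (~1).

```python
import numpy as np

rng = np.random.default_rng(2)
ndat, nrep, sigma = 5_000, 100, 1.0
truth = rng.uniform(1.0, 2.0, ndat)
data = truth + rng.normal(0.0, sigma, ndat)  # level-1 central values

# Idealised level-2 closure fit: each replica generalises to the truth
# plus its own independent fluctuation (it does not memorise its noise).
replicas = truth + rng.normal(0.0, sigma, (nrep, ndat))
central = replicas.mean(axis=0)

chi2_replica = np.mean(((replicas - data) / sigma) ** 2)  # ~2
chi2_central = np.mean(((central - data) / sigma) ** 2)   # ~1 + O(1/nrep)
print(chi2_replica, chi2_central)
```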
I agree, that seems to be the way to go.
This is indeed what we want to achieve:
the central prediction averaged over replicas chi2/ndat \sim 1
perfect, then let's get this done ;)