Comments (6)
Also, the fit-prediction to data comparison of CDHSW_FW showed similar problematic features to those datasets (indeed, not the CDHSW_DXDYNU datasets). Did you check the agreement with yadism for that dataset?
from nnusf.
For CDHSW_FW, not yet! But, indeed, this dataset is also a linear combination of the SFs and hence relies on the coefficients. So we'd expect that the coefficients are also wrong there.
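The dependence on the coefficients can be sketched as follows; this is a minimal illustration, not the actual yadism/nnusf implementation, and the function name and coefficient values are placeholders.

```python
# Hypothetical sketch: an observable assembled as a linear combination of
# structure functions (SFs), as described above. The coefficients a, b, c
# stand in for the kinematic factors; their values here are illustrative.
def observable_from_sfs(f2, fl, xf3, a, b, c):
    """Return a linear combination of the three structure functions."""
    return a * f2 + b * fl + c * xf3

# A wrong coefficient propagates to every observable built this way:
nominal = observable_from_sfs(0.5, 0.1, 0.3, a=1.0, b=-0.2, c=0.4)
wrong = observable_from_sfs(0.5, 0.1, 0.3, a=1.0, b=-0.2, c=0.8)  # bad c
print(nominal, wrong)
```

This is why a dataset built from the same (wrong) coefficients would inherit the same problematic features.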
As far as I can see, CDHSW_FW looks correct.
[20:17:40] INFO x = 0.015 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
0 0.19 0.559 0.148213 0.265139
1 0.25 0.633 0.236481 0.373587
2 0.33 0.690 0.353411 0.512189
3 0.43 0.740 0.493486 0.666873
4 0.56 0.868 0.678475 0.781653
5 0.72 0.926 0.866527 0.935774
6 0.94 0.985 1.028998 1.044668
7 1.22 1.057 1.114885 1.054763
8 1.59 1.159 1.132274 0.976940
9 2.06 1.260 1.195974 0.949185
10 2.68 1.314 1.117653 0.850573
11 3.48 1.455 1.189008 0.817188
12 4.53 1.711 1.258109 0.735306
[20:17:41] INFO x = 0.045 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
13 0.58 0.835 0.790557 0.946775
14 0.76 0.890 0.969074 1.088847
15 0.99 0.963 1.075445 1.116765
16 1.28 1.043 1.104062 1.058544
17 1.67 1.142 1.172488 1.026697
18 2.17 1.219 1.240957 1.018013
19 2.82 1.284 1.227106 0.955690
20 3.66 1.281 1.270865 0.992088
21 4.76 1.446 1.268859 0.877496
22 6.19 1.401 1.296128 0.925145
23 8.04 1.477 1.327977 0.899104
24 10.45 1.491 1.363223 0.914301
25 13.59 1.388 1.389301 1.000937
INFO x = 0.08 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
26 1.04 1.031 1.143668 1.109281
27 1.35 1.067 1.176199 1.102342
28 1.75 1.104 1.200205 1.087142
29 2.28 1.159 1.214579 1.047954
30 2.96 1.220 1.224775 1.003914
31 3.85 1.299 1.262303 0.971749
32 5.01 1.332 1.289804 0.968321
33 6.51 1.354 1.310404 0.967802
34 8.46 1.408 1.297881 0.921791
35 11.00 1.440 1.311558 0.910804
36 14.30 1.440 1.323445 0.919059
37 18.59 1.483 1.335629 0.900626
38 24.16 1.481 1.344160 0.907603
[20:17:42] INFO x = 0.125 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
39 1.62 1.050 1.186746 1.130234
40 2.11 1.102 1.203163 1.091799
41 2.74 1.125 1.201308 1.067830
42 3.56 1.136 1.202408 1.058458
43 4.63 1.178 1.219304 1.035063
44 6.02 1.241 1.236224 0.996152
45 7.82 1.290 1.247604 0.967135
46 10.17 1.259 1.253839 0.995901
47 13.22 1.300 1.238136 0.952412
48 17.18 1.293 1.238168 0.957593
49 22.34 1.286 1.237516 0.962298
50 29.04 1.255 1.243531 0.990862
51 37.75 1.310 1.238886 0.945715
INFO x = 0.175 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
52 2.27 1.017 1.157395 1.138049
53 2.95 1.069 1.150525 1.076263
54 3.83 1.075 1.147126 1.067094
55 4.98 1.071 1.139045 1.063534
56 6.48 1.092 1.141197 1.045052
57 8.42 1.139 1.144075 1.004456
58 10.95 1.132 1.140974 1.007928
59 14.24 1.131 1.136088 1.004499
60 18.51 1.146 1.118766 0.976235
61 24.06 1.149 1.108894 0.965095
62 31.28 1.118 1.106155 0.989405
63 40.66 1.102 1.096351 0.994874
64 52.86 1.019 1.085710 1.065466
INFO x = 0.225 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
65 2.92 0.967 1.062504 1.098763
66 3.79 0.949 1.055086 1.111787
67 4.93 0.971 1.044523 1.075719
68 6.41 0.974 1.029319 1.056795
69 8.33 0.979 1.023388 1.045340
70 10.83 0.988 1.015518 1.027852
71 14.08 0.994 1.005303 1.011371
72 18.30 0.977 0.993911 1.017309
73 23.79 0.994 0.973783 0.979660
74 30.93 0.987 0.965081 0.977792
75 40.21 0.954 0.951739 0.997630
76 52.27 0.942 0.937733 0.995470
77 67.96 0.938 0.924098 0.985179
[20:17:43] INFO x = 0.275 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
78 3.56 0.862 0.942244 1.093091
79 4.63 0.847 0.929586 1.097504
80 6.03 0.877 0.914569 1.042838
81 7.83 0.838 0.896386 1.069674
82 10.18 0.845 0.884136 1.046314
83 13.24 0.853 0.871580 1.021782
84 17.21 0.830 0.857103 1.032654
85 22.37 0.836 0.842152 1.007359
86 29.08 0.804 0.825377 1.026589
87 37.81 0.796 0.809191 1.016571
88 49.15 0.805 0.794789 0.987316
89 63.89 0.802 0.780509 0.973204
90 83.06 0.824 0.765372 0.928849
INFO x = 0.35 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
91 4.54 0.690 0.740328 1.072939
92 5.90 0.675 0.723330 1.071600
93 7.67 0.654 0.705234 1.078339
94 9.97 0.655 0.686169 1.047586
95 12.96 0.633 0.670058 1.058543
96 16.85 0.634 0.654740 1.032713
97 21.90 0.615 0.638873 1.038818
98 28.47 0.614 0.626153 1.019794
99 37.01 0.612 0.610200 0.997059
100 48.12 0.591 0.592128 1.001909
101 62.55 0.567 0.577055 1.017733
102 81.32 0.560 0.563209 1.005730
103 105.70 0.574 0.549302 0.956971
INFO x = 0.45 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
104 5.83 0.480 0.486608 1.013766
105 7.58 0.448 0.469314 1.047576
106 9.86 0.431 0.451858 1.048394
107 12.82 0.413 0.434140 1.051188
108 16.66 0.397 0.419092 1.055647
109 21.66 0.381 0.404813 1.062503
110 28.16 0.370 0.393073 1.062361
111 36.61 0.366 0.379367 1.036521
112 47.59 0.352 0.366173 1.040264
113 61.86 0.335 0.352604 1.052549
114 80.42 0.330 0.341158 1.033812
115 104.50 0.322 0.329888 1.024496
116 135.90 0.308 0.319458 1.037202
[20:17:44] INFO x = 0.55 compare_to_data.py:59
INFO Q2 data yadism ratio compare_to_data.py:60
117 7.13 0.294 0.281345 0.956954
118 9.27 0.283 0.266741 0.942547
119 12.05 0.250 0.252965 1.011862
120 15.66 0.234 0.240068 1.025931
121 20.36 0.220 0.228601 1.039095
122 26.47 0.207 0.220593 1.065666
123 34.42 0.198 0.210030 1.060759
124 44.74 0.187 0.200273 1.070977
125 58.16 0.176 0.191254 1.086673
126 75.61 0.173 0.182533 1.055107
127 98.29 0.162 0.174807 1.079058
128 127.80 0.155 0.167597 1.081269
129 166.10 0.157 0.160888 1.024766
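For reference, the ratio column in the log above is the yadism prediction divided by the data point (e.g. 0.148213 / 0.559 ≈ 0.265139 for the first row). A minimal sketch of that per-point comparison, with a hypothetical helper name rather than the actual compare_to_data.py code:

```python
# Minimal sketch of the comparison printed above: for each point, pair
# Q2, the measured value, the yadism prediction, and ratio = yadism/data.
# The function name is hypothetical; only the ratio convention is taken
# from the table itself.
def ratio_table(q2, data, yadism):
    return [(q, d, y, y / d) for q, d, y in zip(q2, data, yadism)]

for q, d, y, r in ratio_table([0.19, 0.25], [0.559, 0.633], [0.148213, 0.236481]):
    print(f"{q:5.2f} {d:8.3f} {y:10.6f} {r:10.6f}")
```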
I'll open a PR on Yadism to fix the other two cross sections according to the definition in the experimental paper, which apparently differs from the other datasets of the same collaboration.
Thanks a lot @giacomomagni for these numbers. It does indeed seem that CDHSW_FW is correct. Let's first see how the NN predictions change once the coefficients for the CHORUS and NUTEV cross sections are fixed.
With the new grids, the NUTEV cross sections are much better (@giacomomagni). The missing normalization was indeed the problem.
Resolved by #17