
lhcmask's People

Contributors

chernals, freddieknets, giadarol, pbelange, rdemaria, skostogl, sterbini


lhcmask's Issues

Issue with error routines for Run 3

When enable_imperfections or enable_knob_synthesis is activated in the Run 3 pymask, submodule_04e_s1_synthesize_knobs.madx calls errors/HL-LHC/corr_MB_ats_v4. This routine redefines the knobs used for matching the tunes and controlling the coupling. The new expressions of the knobs can be found in temp/MB_corr_setting.mad (see the screenshot below).

[Screenshot Selection_240: redefined knob expressions written to temp/MB_corr_setting.mad]

while the initial expressions of the knobs, as defined in the optics, are:

[Screenshot: original knob definitions from the optics files]

As a result, the following knobs are disconnected

    'qknob_1': {'lhcb1': 'dQx.b1_sq',  'lhcb2':'dQx.b2_sq'},
    'qknob_2': {'lhcb1': 'dQy.b1_sq',  'lhcb2':'dQy.b2_sq'},
    'cmrknob': {'lhcb1': 'CMRS.b1_sq',  'lhcb2':'CMRS.b2_sq'},
    'cmiknob': {'lhcb1': 'CMIS.b1_sq',  'lhcb2':'CMIS.b2_sq'},

and the tune matching and the coupling correction fail:

[Screenshot Selection_241: failing tune matching / coupling correction output]
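
As a possible automated check for this issue, a minimal cpymad sketch (the delta and threshold values are illustrative) that verifies a tune knob is still connected after the error routines have run:

# Hedged sketch: verify that a tune knob still acts on the tunes after
# errors/HL-LHC/corr_MB_ats_v4 has redefined the knob expressions.
def knob_is_connected(mad, knob_name, delta=1e-3, dq_threshold=1e-5):
    '''Return True if varying knob_name by delta changes the tunes of the active sequence.'''
    mad.twiss()
    q1_ref, q2_ref = mad.table.summ.q1[0], mad.table.summ.q2[0]
    initial = mad.globals[knob_name]
    mad.globals[knob_name] = initial + delta
    mad.twiss()
    q1_new, q2_new = mad.table.summ.q1[0], mad.table.summ.q2[0]
    mad.globals[knob_name] = initial  # restore the knob
    return abs(q1_new - q1_ref) + abs(q2_new - q2_ref) > dq_threshold

assert knob_is_connected(mad, 'dqx.b1_sq'), 'dQx.b1_sq has been disconnected by the error routines'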

generate_sixtrack_input error when removing LR from bb_dfs

Hello,

When the user selects only the HO BB lenses in the bb_dfs dataframe and removes all LR, an error occurs when generating the BEAM expert block in fc.3 (generate_sixtrack_input). Although there are workarounds, the possibility that sxt_df_4d is empty should also be considered.
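
A minimal sketch of the guard that could be added (the 'label' column values are assumed from the pymask beam-beam dataframes and may differ):

# Hedged sketch: when only head-on lenses are kept, the 4D (long-range) part
# of the BEAM expert block is empty and should be skipped rather than crash.
sxt_df_ho = bb_df[bb_df['label'] == 'bb_ho']   # head-on lenses (assumed label)
sxt_df_4d = bb_df[bb_df['label'] == 'bb_lr']   # long-range lenses (assumed label)

if len(sxt_df_4d) == 0:
    print('No long-range lenses: skipping the 4D entries of the BEAM expert block in fc.3')
else:
    pass  # generate the 4D entries as usual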

Thanks :)

Renaming of repositories

In light of Massimo's recent comment that the tools could be made applicable to several machines, two steps should be taken:

  • move central repositories from /afs/cern.ch/eng/lhc/optics/simulation_tools to /afs/cern.ch/eng/tracking-tools (Massimo is getting the adequate rights for this)

  • the repositories on github should be renamed to show their non-dependence on the LHC:
    lhcmask -> trackingmask
    lhcerrors -> trackingerrors
    lhctoolkit -> trackingtools
    lhcmachines -> trackingmachines

    (alternative names are welcome)

This is not trivial since, as far as I understood, the forks need to be adapted as well.
But exactly for this reason we shouldn't postpone it too long.

Unit of %EMIT_BEAM

In SixDesk, %EMIT_BEAM is given in µm.

This means that in the mask this needs to be accounted for, either in the main mask file or in module_01.
I propose to do it in module_01, such that the user specifies the emittance in micrometers (which is also how we do it colloquially).
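
A one-line sketch of the conversion that module_01 would then perform (the variable names are illustrative, not the actual mask names):

# Hedged sketch: SixDesk provides %EMIT_BEAM in micrometres, MAD-X expects metres.
emit_beam_um = 2.5                   # value substituted for %EMIT_BEAM by SixDesk [um]
emit_norm_m  = emit_beam_um * 1e-6   # normalised emittance used inside the mask [m]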

XMA and YMA in the BB lenses generated by python (only in MADX)

Hi,
there seems to be a bug if one uses, in MAD-X, the BB lenses installed by the python BB modules.
NOTA BENE

  • no effect on SixTrack input which is correct (GOOD)
  • BUT if you compute footprints/tracking in MADX with BB there is an issue (BAD).

The offending line is:

eattributes(np.sqrt(x['other_Sigma_11']),np.sqrt(x['other_Sigma_33']),

The point is that XMA/YMA are wrongly referred to the weak-beam orbit instead of to the reference orbit (aka the ideal orbit in the MAD-X manual).
Investigations for the fix are ongoing (a sketch of the check is given after this list):

  • check with the new python BB functions that the separations in the MAD-X lenses and in the SixTrack input are the same (the wrong behaviour).
  • check with the legacy BB macros that the XMA/YMA are "half" of the SixTrack separations (the correct behaviour).
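
A sketch of the second check in the list above (all variable names are illustrative):

# Hedged sketch: the SixTrack input carries the full beam-beam separation, while the
# MAD-X beambeam lens should get XMA/YMA w.r.t. the reference (ideal) orbit, i.e.
# "half" of that separation, as produced by the legacy BB macros.
xma_expected = 0.5 * sixtrack_separation_x   # illustrative names, per interaction
assert abs(madx_lens_xma - xma_expected) < 1e-12, \
    'XMA seems to be referred to the weak-beam orbit instead of the reference orbit'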

Automatic checks that could be implemented in pymask

In general it is not sufficient to do checks only when the optics is produced because:

  • The problem could depend on the working point (Q, Q', coupling, combined use of knobs)
  • There is a makethin performed by the user (we have seen issues appearing only in the thin model)
  • In general, the workflow between loading the optics and the final tracked model is quite involved. It is advisable to do the checks downstream.

Ideally I would do the tests at the very end of pymask, but:

  • Final orbit rematch breaks the knobs (should be modified)

  • We have only one sequence at that point (difficult to compute separations or luminosities). Maybe we could always set up both beams (two MAD-X instances)? The overhead would be significant (but we could use two cores)

  • In the presence of machine imperfections, we might have to open the tolerances quite a bit

If needed, we could have two sets of tests:

  • A light one executed at each run
  • A heavier one that we run when we make significant modifications (change of optics, changes in the pymask code)
    • We could have both sequences setup

As much as possible we should make automatic tests, not relying on visual inspection by the user.

  • How do we log the results of the tests? A JSON file? We can edit the input dictionary and dump it at the end.
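
A minimal sketch of the JSON-logging idea (file name, keys and the co_within_tolerance flag are illustrative):

# Hedged sketch: store the check results in the configuration dictionary and dump it.
import json

configuration['check_results'] = {
    'tune_match_penalty': float(mad.globals['tar']),  # final penalty of the last match
    'closed_orbit_ok': bool(co_within_tolerance),     # assumed to be computed by an earlier check
}
with open('final_configuration.json', 'w') as fid:
    json.dump(configuration, fid, indent=2, default=str)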

Automatic checks for knob behaviour

  • Riccardo should already have something that can be reused?
  • Check that knobs are indeed knobs (Guido's approach), done before the makethin; see the sketch after this list
  • Do the checks in beam sigmas (easier to set thresholds)
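
A rough sketch of such a knob-linearity check, to be run before the makethin (the deltas and the tolerance are illustrative):

# Hedged sketch: a knob "is indeed a knob" if equal increments of its value give
# equal increments of the observable it controls (here Q1), i.e. a linear response.
import numpy as np

def check_knob_linearity(mad, knob_name, deltas=(0.0, 0.5e-3, 1.0e-3), rtol=1e-2):
    initial = mad.globals[knob_name]
    q1_values = []
    for dd in deltas:
        mad.globals[knob_name] = initial + dd
        mad.twiss()
        q1_values.append(mad.table.summ.q1[0])
    mad.globals[knob_name] = initial  # restore the knob
    dq = np.diff(q1_values)           # should be (almost) constant for a linear knob
    return np.allclose(dq, dq[0], rtol=rtol)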

When doing parametric scans it is good to produce some "color-plots" a la Sofia with final values from twiss (with bb on/off)

  • Q, Q', coupling (but we should remember that these quantities are matched at the end, so it is not a very robust sanity check)
  • Luminosity is a very powerful one, but cannot be done at the very end since at that point we have only one beam.
  • Other suggestions?

Errors should not pass silently:

  • Match failing while running pymask (Q, Q', coupling, closed orbit)
    • Use tar (the final penalty function of the last match); see the sketch after this list
  • Circuit running out of strength (I believe that the routine of Frederik clips at the maximum value)
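
A minimal sketch of the "fail loudly" idea using tar (the threshold is illustrative, matching the lmdif tolerance used elsewhere in the mask):

# Hedged sketch: raise instead of passing silently when the last MAD-X match did not converge.
penalty = mad.globals['tar']   # final penalty function of the last match
if penalty > 1e-10:
    raise RuntimeError(f"Matching (Q, Q', coupling or closed orbit) did not converge: tar = {penalty:.3e}")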

All the above focuses on linear optics:

  • Do we need any test for non-linear properties?
  • RDTs, Q vs Dp/p, would we know how to interpret/test it?
    • Definitely we could compare B2 vs B4
  • Footprints? (using pysixtrack/sixtracklib)

The beam-beam setup of B4 looks reasonable, but it has never been quantitatively validated.

  • Could we generate a test optics where the two beams are perfectly symmetric from a bb point of view?
    • Would mean: Perfect anti-symmetry at all IPs, same beta* in IP2 and IP8, symmetric phase advances between the IPs. Anything else?
    • Would it be enough to see the same FMA, RDTs, DA?
  • Could we generate b3? Why is this reflect script so complicated?
  • We could compare the low-amplitude tune shift against analytical formulas? Maybe also with a single HO or LR bb interaction, to better understand (see the sketch after this list).
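
For the comparison with formulas, a sketch of the small-amplitude head-on tune-shift estimate for round Gaussian beams (all numbers are illustrative):

# Hedged sketch: linear head-on beam-beam tune shift for round beams,
# dQ_HO ~ -N * r_p / (4 * pi * eps_n) per head-on collision.
import numpy as np

r_p   = 1.535e-18   # classical proton radius [m]
N     = 2.2e11      # bunch population (illustrative)
eps_n = 2.5e-6      # normalised emittance [m rad] (illustrative)
n_ho  = 1           # number of head-on collisions considered

dq_ho = -n_ho * N * r_p / (4 * np.pi * eps_n)
print(f'Expected small-amplitude head-on tune shift: {dq_ho:.4f}')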

In fact, the bb in B1 could also profit from some more systematic checks.


Should we also use tracking in the tests (pysixtrack/sixtracklib)?

  • Footprints
  • Linear optics from tracking
  • BB tune shift from tracking

A few points that should be addressed soon:

  • Pysixtrack generation is untested (the logic with errors needs to be adapted)
  • The machine-imperfection part could profit from some pythonization (Run 3 has never been simulated with errors)
    • Coupling knob synthesis should be easy to port

No b2 available in the modes of the config

I would like to be able to have 'b2_with_bb' and 'b2_without_bb' modes in order to have the MAD-X Survey and Twiss in the same reference frame as for b1. Whilst it is not a physical mode, it is useful for some calculations, for example computing the beam-beam interaction geometry.

Problem with tracking_tools: auto in config.yaml

Now that the tracking tools are no longer expected to be found in /afs/cern.ch/eng/tracking-tools, it would be useful to state in the README that a pymask installation is required inside the lhcmask directory in order to find the tracking tools.

Coupling correction/measurement

I would check that the integer and fractional tune arguments are indeed integer and fractional (just to be a bit more robust; otherwise a wrong coupling may appear).

Example:

def coupling_measurement(mad, qx_integer, qy_integer,
        qx_fractional, qy_fractional,
        tune_knob1_name, tune_knob2_name,
        sequence_name, skip_use):

    import numpy as np # This could be omitted  in the package
    assert isinstance(qx_integer, int), f"ERROR: qx_integer={qx_integer} should be an integer."
    assert isinstance(qy_integer, int), f"ERROR: qy_integer={qy_integer} should be an integer."
    assert 0<=qx_fractional<=1, f"ERROR: qx_fractional={qx_fractional} should be >=0 and <=1."
    assert 0<=qy_fractional<=1, f"ERROR: qy_fractional={qy_fractional} should be >=0 and <=1."

    print('\n Start coupling measurement...')
    if not skip_use:
        mad.use(sequence_name)

    # Store present values of tune knobs
    init_value = {}
    for kk in [tune_knob1_name, tune_knob2_name]:
        try:
            init_value[kk] = mad.globals[kk]
        except KeyError:
            print(f'{kk} not initialized, setting 0.0!')
            init_value[kk] = 0.


    # Try to push tunes on the diagonal
    qmid = (qx_fractional + qy_fractional) * 0.5
    qx_diagonal = qx_integer + qmid
    qy_diagonal = qy_integer + qmid
    mad.input(f'''
        match;
        global, q1={qx_diagonal},q2={qy_diagonal};
        vary,   name={tune_knob1_name}, step=1.E-9;
        vary,   name={tune_knob2_name}, step=1.E-9;
        lmdif,  calls=50, tolerance=1.E-10;
        endmatch;
    ''')

    # Measure closest tune approach
    mad.twiss()
    qx_tw = mad.table.summ.q1
    qy_tw = mad.table.summ.q2
    cta = float(np.abs(2*(qx_tw-qy_tw)-np.round(2*(qx_tw-qy_tw)))/2)


    # Restore initial values of tune knobs
    for kk in [tune_knob1_name, tune_knob2_name]:
        mad.globals[kk] = init_value[kk]

    print('\n Done coupling measurement.')

    return float(cta)

I noted that if one compares the globals before and after the measurement, the variable tar (corresponding to the final penalty function of the matching, "target") changes.

from copy import deepcopy  # needed for the deepcopy calls below

mad.globals['tar'] = 0  # to give an initial state
globals_before = deepcopy(dict(mad.globals))
coupling_measurement(mad, qx_integer=62, qy_integer=60,
                     qx_fractional=.313, qy_fractional=.318,
                     tune_knob1_name='dqx.b1_sq',
                     tune_knob2_name='dqy.b1_sq',
                     sequence_name='lhcb1', skip_use=False)
globals_after = deepcopy(dict(mad.globals))
print('The following variables are created:')
print(list(set(globals_after) - set(globals_before)))
print('The following variables are modified:')
print([i for i in globals_before if globals_after[i] != globals_before[i]])

Some refactoring to be done

linear_normal_form.py should be moved to xpart.

Methods find_closed_orbit_from_tracker and compute_R_matrix_finite_differences could go to xtrack

The check det(R) == 1 and the symplectification should go into compute_linear_normal_form.

_set_orbit_dependent_parameters_for_bb has been ported to xfields and should be removed from pymask. NB: the function from xsuite does not re-enable the bb lenses; this still needs to be done explicitly by pymask.

I would like to remove the check against SixTrack from the xline example; the pymask test can be used for this. It needs to be extended with ebe (element-by-element) functionalities.

Error calling tracker.compute_one_turn_matrix_finite_differences()

Running 000_pymask.py fails with the following error:

File ~/Apps/lhcmask/pymask/pymasktools.py:456, in generate_xsuite_line(mad, seq_name, bb_df, optics_and_co_at_start_ring_from_madx, folder_name, skip_mad_use, prepare_line_for_xtrack, steps_for_finite_diffs)
    452 xf.configure_orbit_dependent_parameters_for_bb(tracker,
    453                    particle_on_co=particle_on_tracker_co)
    455 _disable_beam_beam(tracker.line)
--> 456 RR_finite_diffs = tracker.compute_one_turn_matrix_finite_differences(
    457         particle_on_tracker_co,
    458         **steps_for_finite_diffs)
    459 _restore_beam_beam(tracker.line)
    462 (WW_finite_diffs, WWInv_finite_diffs, RotMat_finite_diffs
    463         ) = xp.compute_linear_normal_form(RR_finite_diffs)

TypeError: compute_one_turn_matrix_finite_differences() got an unexpected keyword argument 'dx'

The error is tracked down to

RR_finite_diffs = tracker.compute_one_turn_matrix_finite_differences(
particle_on_tracker_co,
**steps_for_finite_diffs)

where **steps_for_finite_diffs should be replaced by steps_for_finite_diffs, i.e. the function expects a dictionary, not an unpacked version of it. See:
https://github.com/xsuite/xtrack/blob/b090594d7564fec044361aeee29a661308ea6803/xtrack/twiss_from_tracker.py#L54-L68
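
A sketch of the corrected call following the suggestion above:

# Suggested fix (sketch): pass the dictionary of finite-difference steps itself,
# instead of unpacking it into keyword arguments.
RR_finite_diffs = tracker.compute_one_turn_matrix_finite_differences(
        particle_on_tracker_co,
        steps_for_finite_diffs)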

Importing the warnings package is missing

Hi,

On the "optics_specific_tools.py" file, it is possible to skip the make thin of the sequence by setting:

sliceFactor=0

This will trigger the following:

else:
        warnings.warn('The sequences are not thin!')

However, this will lead to an error, as the warnings python package is not imported anywhere (at least to my knowledge).

Simply adding:

import warnings

in the "optics_specific_tools.py" file solves this issue. But I think it would be worth having it by default in order to avoid this small error.

Thanks a lot.

EOS links

Hi,

following the discussion at the last BB meeting, I think we could provide a simple way to switch between AFS and EOS with two modifications:

  1. in the template mask, add this commented line:

     system, "ln -fns /afs/cern.ch/eng/lhc/optics/simulation_tools simulation_tools";
     ! system, "ln -fns /eos/project/a/abpdata/lhc/optics/simulation_tools simulation_tools";

  2. in /afs/cern.ch/eng/lhc/optics/simulation_tools/machines/optics/ I would put actual copies of the files (adding a copyOptics.sh). In fact the present links will not work on EOS. In addition, having copies will allow tracking the changes (but it requires more work).

G.

Ions pymask

For ions, the charge of the strong beam (BB lenses) should be multiplied by the Z of the strong-beam species (for protons this is irrelevant, since Z=1).
This can be done with on_bb_charge, but we would like to propose adding the relevant parameters explicitly in config.py (for example the rest mass and charge of the weak beam and the charge of the strong beam).
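
A minimal sketch of the on_bb_charge workaround mentioned above (the Z value is illustrative):

# Hedged sketch: scale the strong-beam lens charge via the existing on_bb_charge knob.
Z_strong = 82                            # charge of the strong-beam species (e.g. Pb)
mad.globals['on_bb_charge'] = Z_strong   # instead of 1 as used for protons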

Modularity

To ensure modularity, we need at least the following:

  • modules 1, 2 and 3 end with a USE (not the later modules, since a USE there would destroy the errors)
  • all modules should end on the crossing orbit, hence with a crossing_restore (see the sketch below)

Other requirements that you think of can be added to this issue as comments.
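
As an illustration of the two requirements above, a cpymad sketch of what the end of a module could look like (the USE only in modules 1-3):

# Hedged sketch of a module epilogue satisfying the requirements above.
mad.input('exec, crossing_restore;')   # every module ends with the crossing orbit restored
mad.use(sequence='lhcb1')              # only in modules 1-3: a later USE would destroy the errors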

Check consistency between optics and sequence

Hi,

one could consider adding to the sequence file and to the optics file the variables

  • seq_version
  • opt_version

respectively. Starting from that, one could check/assert the consistency between the two files (see the sketch below).
G.
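
A sketch of such a consistency check (assuming both files define the proposed variables; shown here as a simple equality test):

# Hedged sketch: assert that the loaded optics is consistent with the loaded sequence.
seq_version = mad.globals['seq_version']
opt_version = mad.globals['opt_version']
assert seq_version == opt_version, (
    f'Inconsistent files: seq_version={seq_version}, opt_version={opt_version}')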

MADX versions

Hi,

while testing the examples (examples/hl_lhc_collision), I ran the same example

  1. from my machine NUCBE16940 with MAD-X 5.05.02 (64 bit, Linux) compiled locally
  2. from my machine NUCBE16940 with MAD-X 5.05.02 (64 bit, Linux) obtained from lxplus at /afs/cern.ch/user/m/mad/bin/madx
  3. from lxplus with MAD-X 5.05.02 (64 bit, Linux) obtained from lxplus at /afs/cern.ch/user/m/mad/bin/madx

All 3 outputs are different (by a "small" but "unexpected" amount), and, by the way, they are significantly different from the output stored in the repository at /afs/cern.ch/eng/lhc/optics/simulation_tools/modules/examples/hl_lhc_collision/
(obtained from Gianni's machine using /afs/cern.ch/user/m/mad/bin/madx, but one needs to verify with which version it was obtained; probably not the latest one).

So I suggest clarifying the numerical dependence of the MAD-X results on the platform.

Cheers,
G.

Avoiding over-crabbing is maybe not needed?

In module_01:
!Avoid crabbing more than the crossing angle
if ( abs(on_crab1)>abs(on_x1) && on_crab1 <> 0) {on_crab1 = abs(on_crab1)/on_crab1 * abs(on_x1);}
if ( abs(on_crab5)>abs(on_x5) && on_crab5 <> 0) {on_crab5 = abs(on_crab5)/on_crab5 * abs(on_x5);}

I discussed with Gianni, and we believe that this limit is not needed. The user sets both the crabbing angle and the crossing angle, so he is aware of what he is doing (he might even want to simulate over-crabbing on purpose, e.g. because he does not know the value of the crossing angle very well).

As an alternative, we might output a warning when over-crabbing.
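
A sketch of the warning alternative (written in Python to illustrate the logic; the existing code above is MAD-X):

# Hedged sketch: warn on over-crabbing instead of silently clipping the crab knob.
# on_crab1 and on_x1 are assumed to be available here as Python variables.
import warnings

if abs(on_crab1) > abs(on_x1) and on_crab1 != 0:
    warnings.warn(f'on_crab1 = {on_crab1} exceeds the crossing angle on_x1 = {on_x1} (over-crabbing in IP1)')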

call files from within files (example Efcomp_MQXFbody.madx)

Hey!
just a minor note, but I would like to suggest not calling (mask-external) files from within external files, as this can quickly lead down a rabbit hole.
In the LHC job_tracking.mask one could run the mask "offline" by just going through the call and readtable commands in the main mask and copying the files locally.

In the HL-LHC mask this was broken by the file slhc/errors/Efcomp_MQXFbody.madx, which calls two other files at the end. Not only could these calls be placed directly in the main mask, but simply copying the three files locally would not work either, as the calls also rely on slhc being properly linked.
This still seems to be true in this new mask version.

Any thoughts?

Extracting the knobs of a sequence

Hi,

I would also add

def knobs_df(my_df):
    '''
    Extract the knob list from a pandas DF (it assumes that the DF has a column
    called "knobs" containing lists).

    Args:
        my_df: a pandas DF (it assumes that the DF has a column called "knobs").
    Returns:
        A data frame of knobs, sorted by multiplicity.
    '''
    import itertools
    import numpy as np
    import pandas as pd  # needed for the returned DataFrame

    aux = list(my_df['knobs'].values)
    aux = list(np.unique(list(itertools.chain.from_iterable(aux))))
    my_dict = {}
    for i in aux:
        my_dict[i] = {}
        # knob_df(knob, df) is assumed to be defined elsewhere: it filters the rows
        # of my_df whose "knobs" list contains the given knob.
        filter_df = knob_df(i, my_df)
        my_dict[i]['multiplicity'] = len(filter_df)
        my_dict[i]['dependences'] = list(filter_df.index)
    return pd.DataFrame(my_dict).transpose().sort_values('multiplicity', ascending=False)

to extract and sort "by-use" the knobs of a sequence or of the global space of MAD-X.
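
A hypothetical usage, assuming my_df is a dataframe of MAD-X variables with a "knobs" column listing the knobs each variable depends on:

# Hypothetical usage of the helper above.
knob_summary = knobs_df(my_df)
print(knob_summary.head(10))   # the most-used knobs first
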
Cheers,

on_disp always 0 when enable_knob_synthesis=false

  1. on_disp is deactivated:
     https://github.com/lhcopt/lhcmask/blob/master/python_examples/run3_collisions_python/000_pymask.py#L377

  2. If enable_imperfections is False and enable_knob_synthesis is also False, exec, crossing_restore; is never called and on_disp=0 even though on_disp=1 in config.py:
     https://github.com/lhcopt/lhcmask/blob/master/python_examples/run3_collisions_python/000_pymask.py#L402

  3. At the moment this is fixed by setting enable_knob_synthesis=True. It would be better to call "mad_track.input('exec, crossing_restore;')" outside of "if configuration['enable_knob_synthesis']":
     https://github.com/lhcopt/lhcmask/blob/master/python_examples/run3_collisions_python/000_pymask.py#L405
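
A sketch of the proposed change in 000_pymask.py (surrounding code abbreviated):

# Proposed fix (sketch): restore the crossing knobs unconditionally,
# not only inside the knob-synthesis branch.
if configuration['enable_knob_synthesis']:
    ...  # synthesize the knobs as before
mad_track.input('exec, crossing_restore;')   # now always executed, so on_disp keeps its value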

Issue with beam-beam and ion pymask

Comparing the footprints computed with MAD-X and SixTrack for ions, there is good agreement without beam-beam. However, when head-on collisions in one IP are included, there seems to be no head-on tune spread in the SixTrack footprint, while it is clearly visible in the MAD-X footprint.
[Figures: SixTrack FMA/footprints with and without beam-beam (fma_sixtrack_withBB, fma_sixtrack_withoutBB)]
