
japan's Introduction

Just Another Parity ANalyzer

Doxygen

Doxygen output can be found at: http://hallaweb.jlab.org/parity/prex/japan/Doxygen/html. This will get updated from time to time.

You can also generate your own documentation locally (with your latest changes) by installing doxygen on your system and running, from the root directory of this code:

  doxygen Doxyfile

Workflow

To get the repository from a remote you "clone". Once you have the repository, you "pull" to get changes from the remote repository locally. To change branches locally, you "checkout" the other branch. You "commit" your changes to the local repository, and you propagate your local commits to the remote repository with a "push".

To get repository

Use this if you plan to do work and want to propagate changes to the repository for others to see:

  git clone git@github.com:JeffersonLab/japan

Are you getting an error? Do you need access to the repository? Contact cipriangal, paulmking or kpaschke.

Alternately, if you just want a copy to run (without making changes to the repository):

git clone https://github.com/JeffersonLab/japan

Building the code

Prerequisites: boost, root

mkdir build; cd build
cmake ../
make

Compiles on linux machines but has issues on Macs (see #2).

Xcode

If you want to use Xcode on a Mac, use:

mkdir buildXcode
cd buildXcode
cmake -G Xcode ../

To make modifications

Before starting work make sure you have the latest changes from the remote repository:

git pull

Create a branch (see https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging for more details on branching) for the issue/improvement you are trying to add:

git checkout -b issueName

You are now in a new branch named "issueName". If you want others to see your work, make sure you set up tracking of this branch on the remote repository:

git push -u origin issueName

Modify any file you need. For the modified files:

git add folder/modifiedFile.hh
git commit -m "Message for this commit"

At this point your code is tracked and committed on the local repository. To make changes available to others:

git push

Attaching your username for commit tracking purposes

To have your name properly tracked when committing (so that we know who is responsible for changes), please use the --author option:

git commit --author="Your Name <your@email>"

japan's People

Contributors

adhidevi, allison-zec, cameronc137, cipriangal, hansenjo, leafybillow, paulmking, rahmans1, rakithab, rr521111, tianye8001, vfowen, wdconinc, yufanchen88

japan's Issues

Root file refreshing is broken in panguin

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop
  • revision (HEAD for most recent): HEAD
  • OS or system: apar@adaq3
  • Standard install of develop

Steps to reproduce: (give a step by step account of how to trigger the bug)

  1. Comment out the line fRootTree[i]->Refresh(); and run panguin in watchfile mode on an online-japan produced ROOT file

Expected Result: (what do you expect when you execute the steps above)

Without commenting out the refresh line I would expect to get good online plots all the time

Actual Result: (what do you get when you execute the steps above)

The default behavior erases the initially drawn information. When I comment out the Refresh() call it will at least re-draw correctly, but with stale tree information instead of actually updating it. This refresh command is not working.

Execution of "enable-mapfile" option results in crashes

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop (see below for discussion)
  • revision (HEAD for most recent): HEAD (commit 0cce610)
  • OS or system: ifarm (and adaq3, but description below is on ifarm)
  • Special ROOT or Geant4 versions? Using "JLab software" 2.3 on ifarm

Steps to reproduce: (give a step by step account of how to trigger the bug)

  • With the default configuration options, the execution crashes because TMapFile requires the ROOT "New" library which is not enabled by default.
  1. Generate a test data file using the mock data generator: ./build/qwmockdatagenerator -r 10 -e 1:10000 --config qwparity_simple.conf --detectors mock_detectors.map
  2. Trying to analyze that file crashes: ./build/qwparity --enable-mapfile -r 10 --config qwparity_simple.conf --detectors mock_detectors.map
    The result is:
    Error in TMapFile::TMapFile: no memory mapped file capability available
    Use rootn.exe or link application against "-lNew"
    ================== RealTime Producer Memory Map File =================
    Memory mapped file: /dev/shm//QwMemMapFile.map
    Title: RealTime Producer File
    Option: file closed
    ======================================================================
    QwRootFile::ConstructHistograms::detectors address 0x7ffcfc5e7f20 and its name mps_histo

*** Break *** segmentation violation

  • In the branch 'bugfix-using-TMapFile', I have modified the cmake to include the ROOT "New" library. Historically, the way to use the "New" library was to have it occur first in the linking; at the starting point of the branch, it is not linked early, but at the end of the ROOT library linking process.
  1. Trying the same analysis results in immediate crash (although even qwparity --help crashes):
    [email protected]> ./build_new/qwparity --enable-mapfile -r 10 --config qwparity_simple.conf --detectors mock_detectors.map

*** Break *** segmentation violation

Run list (`--runlist <file>`) must contain runs in increasing order

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop
  • revision (HEAD for most recent): HEAD
  • OS or system: gcc8 ubuntu
  • Special ROOT or Geant4 versions? ROOT 6.16.00

Steps to reproduce: (give a step by step account of how to trigger the bug)

  1. Create runlist file Parity/prminput/adc18_runlist.map with
[3970]
[3284]
[12208]
[13631]
  2. Run qwparity --runlist adc18_runlist.map --config adc18.conf

Expected Result: (what do you expect when you execute the steps above)

Expect to analyze run 12208 at some point.

Actual Result: (what do you get when you execute the steps above)

Starts with run 3970, then 3284, then searches for and fails to find all runs up to 3970, redoes 3970, redoes 3284, and so on in an endless loop.

Workaround:

Sort your runlist file! :-)

pking.feedback doesn't compile feedback correctly

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): pking.feedback
  • revision (HEAD for most recent): HEAD ad878dc
  • OS or system: gcc-8, cmake 3.12.1
  • Special ROOT or Geant4 versions? 6.14

Steps to reproduce: (give a step by step account of how to trigger the bug)

  1. In main project source dir: mkdir build && cd build && cmake .. && make

Expected Result: (what do you expect when you execute the steps above)

Should build libQwFeedback.so library and qwfeedback executable.

Actual Result: (what do you get when you execute the steps above)

Doesn't seem to build anything inside the Feedback directory...

Beam trip detection doesn't do what it "ought"

After the discussion at the meeting on Wednesday, 27 March, I checked the QwEventRing for the "pre-cut" and "beam trip holdoff". They do not appear in the code, and we do not use the "kBeamTripError" flag value at all.

Looking back in the QwAnalysis development history, it appears that Qweak had determined that a stringent low-current cut and a stability cut were sufficient for the running conditions during Qweak, and during a refactoring period the beam trip detection was removed from QwEventRing (due to changes in how the event ring was being handled).

For PREX, we want to have both the "pre-trip" and "recovery-holdoff" available. Recovering these will not be a simple merge, but will instead require cut-and-paste from the QwAnalysis revisions with the latest version of that functionality, and new development work. But, the overall changes are modest and fairly self contained.

Add options to open data files by name and to specify data file path

Is your feature request related to a problem? Please describe.
You cannot analyze a particular file by name or by absolute/relative path; data files are searched for by run number, and the search paths are the current working directory and the directory specified by $QW_DATA.

Describe the solution you'd like
I would like to have an option to analyze a particular file by name, instead of just by run number.

I would also like to be able to specify the data directory using an option, instead of using the environment variable.

Both of these would require changes in how QwEventBuffer finds the filename.

Generalize the way hardware channels are mapped into detector objects

Right now, when we construct a detector object, the allowed hardware channel types are basically hardcoded. It would be good if the name of the hardware channel in the channel map file could be used dynamically to assign the proper VQwHardwareChannel-derived class without having to have an explicit cascade of if-statements.

Some subtle points:

  • A complex detector object (like a stripline BPM) probably should be built with only a single instrumentation type.
  • What should happen if we do arithmetic with channels which have different data structures? All of our channels have at least a single value per helicity window, but some have oversamples. For instance, if we tried to multiply a VQWK value with a scaler value, do we "promote" the scaler to give its value in each subblock, or "demote" the result to only have the hardwaresum?
  • What would this do to throughput times? Ideally all of this should be handled once at initialization, but if we have to dynamically generate new objects where we need to determine their types during the running process, it would likely bog us down.

Documentation Request: ROOT IO of JAPAN outputs and EPICS data in PANGUIN

Is your feature request related to a problem? Please describe.
There are many ways to read and write root trees (from JAPAN or otherwise), as well as EPICS data (reading and writing, for stripcharts and alarm handling), and it would be really helpful to keep all of that information together in a concise set of examples.

Describe the solution you'd like
A wiki section in here and also a set of example macros/.cfg files with ROOT IO (reading and writing new root trees I guess) and EPICS calls

Describe alternatives you've considered
Just sticking with data handler manipulation for new data and just using tree Draw() commands forever

Additional context
I like to document everything

Proposed tree names

Slightly related to issues #19 and #22.

The proposal is to rename the trees generated such that the names for each type of tree are shorter.

  • Single event tree: "Mps_Tree" -> "evt"
  • Multiplet tree: "Hel_Tree" -> "mul"
  • Pair tree (built from pairs, no matter the size of the multiplet): doesn't exist yet -> "pr"
  • EPICS and slow controls tree: "Slow_Tree" -> "slow"
  • Burst tree using short intervals; this is more of a test for MOLLER than something PREX needs, but it ought to be something which can just be switched on without much development: "Burst_Tree" -> "burst"
  • Minirun tree; this would likely be implemented using the burst mechanism with a long interval (~5min); probably we'd want to have correlation info put into this tree too: doesn't exist yet -> "mrun"
  • Corrected trees; a first-pass corrected tree would be associated to an uncorrected tree; the names would be formed by adding a "c" to the end of the uncorrected tree. If we have several types of corrections, we may need to think about this more: doesn't exist yet -> "mulc"

Feature: Development of "mini runs"

Is your feature request related to a problem? Please describe.
For many time-dependent effects, such as determining correlations, the natural hour-long length of data files is too long. Also, flagging a full data file as being good/bad/suspect does not give us adequate granularity for flagging problems.

Describe the solution you'd like
A software option to enable output of results (in some form) accumulated for a certain number of patterns, forming a "mini-run". A minirun should have mean, RMS, error, and number of good entries for the yields, asymmetries, and diffs of "all" quantities. Correlation matrices should also be determined for the minirun.

A minirun which has "too few" events should be combined (or combinable) with an adjacent minirun, so we do not have orphan events.

Should a minirun be a fixed time, or should it be a fixed number of "good events"?

Minirun outputs could go into a special tree, or have the existing trees broken into segments within their ROOT file.

The existing multiplet tree should have a variable indicating the minirun index for a particular event.

Describe alternatives you've considered
We could have CODA break datafiles at much smaller sizes to create file segments that each last ~5 minutes (or however long we want as a minirun). We'd then analyze each file as an independent unit. It is less flexible than using a software minirun length, though.

Restrict push access for users "Your Name" on adaq3

Is your feature request related to a problem? Please describe.
It is currently possible to checkout the repository over https (without authentication), commit locally without valid email address, and push to github over https (again without authentication).

Describe the solution you'd like
Require users to commit with valid user.name and user.email. Require users to authenticate when pushing.

Describe alternatives you've considered
On shared systems we can't set git config --global user.name.

ROCID_t and BankID_t unsigned but often assigned -1 or compared to -1

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop
  • revision (HEAD for most recent): HEAD
  • OS or system: gcc8
  • Special ROOT or Geant4 versions?

Steps to reproduce: (give a step by step account of how to trigger the bug)

The typedefs ROCID_t and BankID_t in QwTypes.h (typedef UInt_t ROCID_t;) are defined as UInt_t and ULong64_t, but are often assigned -1. E.g.

$ grep fCurrentROC Analysis/src/*
Analysis/src/VQwSubsystem.cc:  fCurrentROC_ID    = -1;
Analysis/src/VQwSubsystem.cc:    fCurrentROC_ID    = roc_id;
Analysis/src/VQwSubsystem.cc:    fCurrentROC_ID    = -1;
Analysis/src/VQwSubsystem.cc:  if (fCurrentROC_ID != -1) {
Analysis/src/VQwSubsystem.cc:    stat = RegisterROCNumber(fCurrentROC_ID, bank_id);
Analysis/src/VQwSubsystem.cc:    fCurrentROC_ID  = -1;
Analysis/src/VQwSubsystem.cc:  if (fCurrentROC_ID != -1){
Analysis/src/VQwSubsystem.cc:    Int_t roc_index = FindIndex(fROC_IDs, fCurrentROC_ID);
Analysis/src/VQwSubsystem.cc:    fCurrentROC_ID  = -1;

Not sure if this is well-defined or desirable... It does generate a fair number of compiler warnings.

Add the "bpmelli" variable to the Stripline BPM class

Caryn's "bpmelli" variables should be added to the Stripline BPM class. These should be calculated during the ProcessEvent function, and a helicity-correlated difference should be calculated, just as is done for the positions.

This will also require a new calibration parameter representing the nominal beam spot size at that BPM.

Templated VQwSubsystemParity for Sum, Difference, Ratio to use T::operator+, T::operator- etc

Is your feature request related to a problem? Please describe.
There are a lot of redundant implementations of VQwSubsystemParity::Sum, Difference, and Ratio in the derived classes.

virtual void Sum(VQwSubsystem *value1, VQwSubsystem *value2) = 0;

A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I'm always frustrated, but that's neither here nor there.

Describe the solution you'd like
These are not necessary if we inherit from a templated base, e.g. VQwSubsystemParity<T>, and define

virtual void Sum(T* value1, T* value2) { *this = *value1; *this += *value2; }
//etc

Describe alternatives you've considered
Status quo is always an alternative.

Does not compile on ifarm

The code as is does not compile on the ifarm1401 node.

So far it successfully builds the evio component but gets stuck on the SVN information step. Suggest replacing SVN info with Git info (similar to remoll) and going from there.

Warn of mismatch between helicity pattern size in config files and data stream

If the size of the helicity pattern as used in the data stream is smaller than the size of the pattern the analyzer is expecting (for instance, the analyzer is trying to decode octets, but the data is in quartets), it will not provide any warnings, nor will it build valid pattern data.
We need to have a warning so we know we have a mismatch.

I'm pretty sure the contrary case will give a warning, but it should also be checked.

Enhancement: Enable github issues template for better bug reports

For remoll we enforce (strongly suggest :-) ) the following issue template, under https://github.com/JeffersonLab/japan/issues/templates/edit. I have found it to lead to more useful bug reports.

### Environment: (where does this bug occur, have you tried other environments)
- branch (master for latest released): 
- revision (HEAD for most recent): 
- OS or system: 
- Special ROOT or Geant4 versions? 

### Steps to reproduce: (give a step by step account of how to trigger the bug)
1. 
2. 

### Expected Result: (what do you expect when you execute the steps above)


### Actual Result: (what do you get when you execute the steps above)

Compilation error on ubuntu 18.10, gcc-8.2.0: overloaded ‘abs(UInt_t)’ ambiguous

/home/wdconinc/git/japan/Analysis/src/QwF1TDContainer.cc: In member function ‘Bool_t QwF1TDContainer::CheckDataIntegrity(ROCID_t, UInt_t*, UInt_t)’:
/home/wdconinc/git/japan/Analysis/src/QwF1TDContainer.cc:1612:91: error: call of overloaded ‘abs(UInt_t)’ is ambiguous
      diff_trigger_time = abs( reference_trig_time-fF1TDCDecoder.GetTDCHeaderTriggerTime() );

Will fix it myself.

Add a folder to contain scripts

We should add a folder to the base directory of the japan repo to contain additional scripts, such as the pedestal calibration scripts. This could then be subdivided further as needed.

My initial proposal is "Extensions", but other callers proposed "rootscripts" or "scripts". Another name would be "Macros" (or "macros").

Transition from environment variables to parameter/configuration variables

We should look at the list of environment variables we had used, and instead create parameter/configuration variables for things like raw data directory, rootfile output directory, etc.
Decide if it is useful to keep any of the environment variables.

Then either modify or remove the scripts in SetupFiles as appropriate.

It looks like the variables used from within the existing JAPAN are: QW_DATA, QW_ROOTFILES, QW_PRMINPUT, QW_TMP. The variables QW_FIELDMAP, QW_LOOKUP, and QW_SEARCHTREE were used by the QwTracking analysis, and so are not needed.

Feature Request: Universal channel-wise helicity-agnostic asymmetry noise floor measurements

Is your feature request related to a problem? Please describe.
When doing pedestal studies it is nice to look at the pair-wise asymmetry noise floor. This asymmetry would not be related to the helicity pattern; it would just use each pair of neighboring events, or some helicity-agnostic pattern, to check for line noise or electronic pickup. If we could have it generated for every channel automatically I think that would be nice (as long as this isn't threatening blinding, I guess); it would be useful as a diagnostic for the health of detector channels during normal running, as well as save time for users doing dedicated pedestal studies.

Describe the solution you'd like
Add a new branch to either MPS tree or Hel tree for each regular pattern kind of asymmetry noise floor we can think to calculate

Describe alternatives you've considered
I can always just loop over the MPS tree in a macro and take the asymmetry myself, but having this be done in a running sum kind of way would allow for an online analyzer to track this number as well, and if included in the burst tree (this is minute scales right?) would allow for detection of short time scale introduction of noise (especially in detectors like the SAMs or some simple scaler which is supposed to be very stable). I'm probably asking to reinvent the wheel, as this is likely already implemented or there are probably more intelligent ways to monitor noise floors in online analysis, but I think this kind of thing would be nice to look at on various time scales without needing to adapt slower ROOT macros for each feasible application.

Additional context
We will need to do battery and empty channel pedestal studies regularly, so including this kind of feature from the ground up will make life easier and hopefully prevent mistakes in hasty analyses later on.

QwHelicityPattern, LRBCorrector, VQwDataHandler constructor initialization order results in crash

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): feature-adc18-2nd-attempt
  • revision (HEAD for most recent): 8627193
  • OS or system: gcc8
  • Special ROOT or Geant4 versions? 6.16

Steps to reproduce: (give a step by step account of how to trigger the bug)

  1. gdb -ex run --args ./build/qwparity_simple -e :10 -c Parity/prminput/adc18.conf

Expected Result: (what do you expect when you execute the steps above)

Should run (or at least not crash).

Actual Result: (what do you get when you execute the steps above)

Creating subsystem of type QwBeamLine with name Beamline.
Parameter file: /home/wdconinc/git/japan/Parity/prminput/adc18_beamline.map
Variables to publish:
Parameter file: /home/wdconinc/git/japan/Parity/prminput/adc18_pedestal.map
List of published values:
Blinding parameters have been calculated.
QwBlinder::InitTestValues(): A total of 10 test values have been calculated successfully.
Warning: enable-lrbcorrection is set to false.  Skipping LoadChannelMap for LRBCorrector
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 0) >= this->size() (which is 0)

Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff6ebe535 in __GI_abort () at abort.c:79
#2  0x00007ffff7183957 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007ffff7189aa6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007ffff7189ae1 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007ffff7189d14 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffff7185855 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00007ffff7f367fd in std::vector<VQwDataHandler::EQwRegType, std::allocator<VQwDataHandler::EQwRegType> >::_M_range_check (__n=<optimized out>, 
    this=0x7fffffffc828) at /usr/include/c++/8/bits/char_traits.h:285
#8  std::vector<VQwDataHandler::EQwRegType, std::allocator<VQwDataHandler::EQwRegType> >::at (__n=<optimized out>, this=0x7fffffffc828)
    at /usr/include/c++/8/bits/stl_vector.h:981
#9  VQwDataHandler::ConnectChannels (this=this@entry=0x7fffffffc7e0, asym=..., diff=...) at /home/wdconinc/git/japan/Parity/src/VQwDataHandler.cc:68
#10 0x00007ffff7e3c666 in LRBCorrector::ConnectChannels (this=0x7fffffffc7e0, asym=..., diff=...) at /home/wdconinc/git/japan/Parity/src/LRBCorrector.cc:146
#11 0x00007ffff7e3e48e in LRBCorrector::LRBCorrector (this=0x7fffffffc7e0, options=..., helicitypattern=..., run=...)
    at /home/wdconinc/git/japan/Parity/src/LRBCorrector.cc:53
#12 0x00007ffff7f081c5 in QwHelicityPattern::QwHelicityPattern (this=0x7fffffffa250, event=..., run=...) at /usr/include/c++/8/ext/new_allocator.h:79
#13 0x00005555555607f8 in main (argc=<optimized out>, argv=<optimized out>) at /home/wdconinc/git/japan/Parity/main/QwParity_simple.cc:116

Additional inspection: in frame 9, VQwDataHandler::ConnectChannels, you can see that we are inside the loop over dv from 0 to fDependentName.size(). You can also print the size, which is zero.

(gdb) frame 9
#9  VQwDataHandler::ConnectChannels (this=this@entry=0x7fffffffc7e0, asym=..., diff=...) at /home/wdconinc/git/japan/Parity/src/VQwDataHandler.cc:68
68          if (fDependentType.at(dv)==kRegTypeMps) {
(gdb) print dv
$1 = <optimized out>
(gdb) print fDependentName.size()
$2 = 0

So, yeah, that shouldn't happen (and probably depends on which compiler you are running).

I suspect this is caused by all this being run inside the constructor of QwHelicityPattern, which calls the constructor of LRBCorrector regress_from_LRB, which calls VQwDataHandler::ConnectChannels on variables that will be instantiated later. Probably the size() call on the not-yet-existent vector fDependentName returns uninitialized memory and therefore we end up in the loop.

Similar to the static initializer fiasco but not quite...

Suggested solution? Probably figure out what order this should be done in and make it less fragile? Not code I am familiar with.

Does not compile on Mac

On local machine with:
-- The C compiler identification is AppleClang 9.1.0.9020039
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- Found ROOT 6.12/06 in /Users/ciprian/root6/build
-- Boost version: 1.66.0
-- Found the following Boost libraries:
-- program_options
-- filesystem
-- system
-- regex

The compilation fails on the first component (evio) with the following error (and 4 other similar errors):
/Users/ciprian/prex/japan/evio/src/THaCodaFile.C:288:12: error: case value evaluates to 2155020293, which cannot be narrowed to type 'int' [-Wc++11-narrowing]
case S_EVFILE_UNXPTDEOF :
^
/Users/ciprian/prex/japan/evio/include/evio.h:20:31: note: expanded from macro 'S_EVFILE_UNXPTDEOF'
#define S_EVFILE_UNXPTDEOF 0x80730005 /* Unexpected end of file while reading event */
^
/Users/ciprian/prex/japan/evio/src/THaCodaFile.C:285:12: error: case value evaluates to 2155020292, which cannot be narrowed to type 'int' [-Wc++11-narrowing]
case S_EVFILE_UNKOPTION :

Note that on my machine the cmake gives this warning for the boost library (probably unrelated to the above):
CMake Warning at /usr/local/Cellar/cmake/3.10.3/share/cmake/Modules/FindBoost.cmake:801 (message):
New Boost version may have incorrect or missing dependencies and imported
targets
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.10.3/share/cmake/Modules/FindBoost.cmake:907 (_Boost_COMPONENT_DEPENDENCIES)
/usr/local/Cellar/cmake/3.10.3/share/cmake/Modules/FindBoost.cmake:1542 (_Boost_MISSING_DEPENDENCIES)
CMakeLists.txt:70 (find_package)

Quickstart guide in the format of SW Carpentry

Is your feature request related to a problem? Please describe.
There is currently no easy way to get started with japan if you don't already know it well or aren't comfortable compiling from github.

Describe the solution you'd like
I would like a quickstart guide that I can point students to get started. It should introduce the philosophy behind the japan analyzer and demonstrate some common tasks.

Describe alternatives you've considered
Modifications to the README.md file, but that's too technical. Ideally this should be accessible to anyone without needing to log in to the farm or compile the code.

Additional context
I've created https://github.com/JeffersonLab/swcarpentry-jlab-singularity and https://github.com/JeffersonLab/swcarpentry-jlab-jupyter so I'm familiar with the concept.

Doxygen warnings

Running on macOS with doxygen 1.8.14

Probably related to things that were removed (like the VDCs):
<unknown>:55: warning: unable to resolve reference to `VQwSubsystemTracking' for \ref command
<unknown>:55: warning: unable to resolve reference to `VQwSubsystemTracking' for \ref command
<unknown>:55: warning: unable to resolve reference to `QwScanner' for \ref command
<unknown>:55: warning: unable to resolve reference to `QwDriftChamber' for \ref command
<unknown>:55: warning: unable to resolve reference to `QwDriftChamberHDC' for \ref command
<unknown>:55: warning: unable to resolve reference to `QwDriftChamberVDC' for \ref command

Confusing operator precedence in QwEventBuffer

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop
  • revision (HEAD for most recent): HEAD
  • OS or system: c++
  • Special ROOT or Geant4 versions?

Steps to reproduce: (give a step by step account of how to trigger the bug)

The line

tmpbank = (tmpbank)<<32 + fSubbankTag;

is, given C++ operator precedence (+ binds more tightly than <<), identical to

tmpbank = (tmpbank)<<(32 + fSubbankTag);

This is confusing, at best.

Expected Result: (what do you expect when you execute the steps above)

Does this do the right thing? Maybe. Maybe not. Maybe no one ever noticed...

Bug report: without an event cut file, all events are cut for that subsystem

Environment: (where does this bug occur, have you tried other environments)

  • branch (master for latest released): develop
  • revision (HEAD for most recent): HEAD
  • OS or system: any
  • Special ROOT or Geant4 versions?

Steps to reproduce: (give a step by step account of how to trigger the bug)

  1. Delete a nearly empty event cuts file which has only the line "EVENTCUTS = 3" (one example is prex_sam_eventcuts.map)
  2. Analyze a run with data in that subsystem

Expected Result: (what do you expect when you execute the steps above)

You should see data being processed for those subsystem channels, because there aren't event cuts applied to them.

Actual Result: (what do you get when you execute the steps above)

All of the data for those subsystem channels is cut.

What do I want to have happen:

Make the event cut system default to not cutting the data for a subsystem.

Do we want to have subdirectories in the prminputs directory

At the 10 October meeting, we discussed if we want to have a segmentation of the prminputs directory so that all files of a given region are in a subdirectory, or to have some other suborganization of that.

We should think about what this structure would look like, before we start to implement it.

Feature Request: Generalize data handler management and combine .map files

Is your feature request related to a problem? Please describe.
There are several data handler derived classes and they use separate .map files. Also this currently hardcodes the list of handlers, so it should be configurable instead.

Describe the solution you'd like
Generalize the handling of handlers, and allow for user configuration files to determine which ones are initialized in addition to the parameters that are passed, all in one file.

Describe alternatives you've considered
We could leave it as is and just keep track of which ones are hard coded in.

Additional context
I will try to attack this, but I may get stumped pretty quickly.

Split the parameter files into an independent repo?

For maintainability reasons, it may be a good idea to move all the parameter files into a separate repo, and then possibly include that as a dependency.
If we don't include that as a dependency, we should leave a few example parameter files in this one.

We should then make sure there's a command line argument to define the parameter file path. Perhaps if the "config" argument contains a path, it would be included as the first search path for all other parameter files?

Rationale: the parameter files change much more often than the actual source code, and so their changes can clutter the real source file changes. Also, we could easily make branches or forks representing different experiments (PREX vs. MOLLER, for example).
