Scripts
Scripts to be executed from the command line are described below:
sequencer.py
The sequencer is the main script of lstosa. At the moment it can only be executed for the LST1 telescope. It triggers the whole analysis chain, creating the analysis folders and submitting jobs to the SLURM queue system. For each run/sequence it sends a job to the working nodes.
In the analysis folders you will find several types of sequence_*
files:
sequence_*.sh
File submitted to the working nodes. It calls either calibration_pipeline.py or datasequence.py, depending on the arguments given to the sequencer and the type of sequence/run. You can submit these jobs manually by executing
sbatch sequence_*.sh
sequencer_*.txt
DEPRECATED. Specify the subruns of a sequence/run that will be analyzed by the sequencer.
sequence_*.history
This file keeps track of the execution history of a sequence/run.
sequence_*.{err,log,out}
These files are the logs of the job executed on the working nodes. You can find the output of the lstchain executables in the sequence_*.out file.
sequence_*.veto
This file is just a flag for a vetoed sequence/run that will not be analyzed by the sequencer.
sequence_*.closed
This file is just a flag for an already closed sequence/run that will not be analyzed by the sequencer.
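The .veto and .closed files are plain filesystem flags: the sequencer just checks whether they exist before (re)processing a sequence. A minimal sketch of that convention (the helper and file names are illustrative, not lstosa's actual code):

```python
from pathlib import Path

def should_process(sequence_sh: Path) -> bool:
    """Return True unless a .veto or .closed flag file exists for this sequence."""
    stem = sequence_sh.with_suffix("")  # e.g. sequence_LST1_01234.sh -> sequence_LST1_01234
    vetoed = stem.with_suffix(".veto").exists()
    closed = stem.with_suffix(".closed").exists()
    return not (vetoed or closed)
```

Creating such a flag is just a matter of touching the file; removing it makes the sequence eligible for the sequencer again.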
Usage
Build the jobs for each run and process them for a given date
usage: sequencer.py [-h] [-c CONFIG] [-d DATE] [-s] [-t] [-v] [--no-submit]
[--no-calib] [--no-dl2]
{ST,LST1,LST2,all}
Positional Arguments
- tel_id
Possible choices: ST, LST1, LST2, all
telescope identifier LST1, LST2, ST or all.
Named Arguments
- -c, --config
Use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -d, --date
Date (YYYY-MM-DD) of the start of the night
- -s, --simulate
Do not run, just simulate what would happen
Default: False
- -t, --test
Avoid interaction with SLURM
Default: False
- -v, --verbose
Activate debugging mode
Default: False
- --no-submit
Produce job files but do not submit them
Default: False
- --no-calib
Skip calibration sequence. Run data sequences assuming calibration products already produced (default False)
Default: False
- --no-dl2
Do not produce DL2 files (default False)
Default: False
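Putting the options together, a dry run for a single night can be expressed as the command below. The date is a placeholder, and the command is only assembled and printed here, not executed:

```python
# Assemble a simulated sequencer invocation (nothing is submitted to SLURM).
cmd = [
    "sequencer.py",
    "--date", "2023-09-14",  # start of the night (placeholder date)
    "--simulate",            # report what would happen, do not run
    "--no-dl2",              # stop the chain at DL1
    "LST1",                  # telescope identifier (positional)
]
print(" ".join(cmd))
```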
API/References
Orchestrator script that creates and executes the calibration sequence and prepares a SLURM job array which launches the data sequences for every subrun.
Functions
- Runs the single process for a single telescope.
- Update the percentage of files produced of each type (calibration, DL1, DATACHECK, MUON and DL2) for every run considering the total number of subruns.
- Get the number of files produced for a given sequence and data level.
- Build the status table shown by the sequencer.
- Update the status report table shown by the sequencer.
- Update the job information from SLURM.
calibration_pipeline.py
It produces the calibration products.
Usage
usage: calibration_pipeline.py [-h] [-c CONFIG] [-d DATE] [-s] [-t] [-v]
[--prod-id PROD_ID]
[--drs4-pedestal-run DRS4_PEDESTAL_RUN]
[--pedcal-run PEDCAL_RUN]
{LST1}
Positional Arguments
- tel_id
Possible choices: LST1
Named Arguments
- -c, --config
Use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -d, --date
Date (YYYY-MM-DD) of the start of the night
- -s, --simulate
Do not run, just simulate what would happen
Default: False
- -t, --test
Avoid interaction with SLURM
Default: False
- -v, --verbose
Activate debugging mode
Default: False
- --prod-id
Set the prod ID to define data directories
- --drs4-pedestal-run
DRS4 pedestal run number
- --pedcal-run
Calibration run number
API/References
Calibration pipeline
Script to process the pedestal and calibration runs to produce the DRS4 pedestal and charge calibration files. It pipes together the two onsite calibration scripts from lstchain.
Functions
- Handle the two stages for creating the daily calibration products: DRS4 pedestal and charge calibration files.
- Create the calibration file to transform ADC counts into photo-electrons.
- Create a DRS4 pedestal file for baseline correction.
- Build the create_drs4_pedestal command.
- Build the create_calibration_file command.
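The two stages above are strictly ordered: the charge calibration needs the DRS4 pedestal file first. A sketch of that sequencing with placeholder commands (`true` stands in for the actual lstchain onsite scripts, which are not invoked here):

```python
import subprocess

# Run the calibration stages in order; check=True aborts the pipeline
# as soon as one stage fails, so stage 2 never runs on a broken stage 1.
stages = [
    ["true", "drs4-pedestal-stage"],       # placeholder for the DRS4 pedestal step
    ["true", "charge-calibration-stage"],  # placeholder for the charge calibration step
]
for stage in stages:
    subprocess.run(stage, check=True)
print("all calibration stages finished")
```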
datasequence.py
It processes the raw R0 data, producing the DL1 and DL2 files.
Usage
usage: datasequence.py [-h] [-c CONFIG] [-d DATE] [-s] [-t] [-v]
[--prod-id PROD_ID] [--no-dl2]
[--pedcal-file PEDCAL_FILE]
[--drs4-pedestal-file DRS4_PEDESTAL_FILE]
[--time-calib-file TIME_CALIB_FILE]
[--systematic-correction-file SYSTEMATIC_CORRECTION_FILE]
[--drive-file DRIVE_FILE] [--run-summary RUN_SUMMARY]
[--pedestal-ids-file PEDESTAL_IDS_FILE]
run_number {ST,LST1,LST2}
Positional Arguments
- run_number
Number of the run to be processed
- tel_id
Possible choices: ST, LST1, LST2
Named Arguments
- -c, --config
Use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -d, --date
Date (YYYY-MM-DD) of the start of the night
- -s, --simulate
Do not run, just simulate what would happen
Default: False
- -t, --test
Avoid interaction with SLURM
Default: False
- -v, --verbose
Activate debugging mode
Default: False
- --prod-id
Set the prod ID to define data directories
- --no-dl2
Do not produce DL2 files (default False)
Default: False
- --pedcal-file
Path of the calibration file
- --drs4-pedestal-file
Path of the DRS4 pedestal file
- --time-calib-file
Path of the time calibration file
- --systematic-correction-file
Path of the systematic correction factor file
- --drive-file
Path of drive log file with pointing information
- --run-summary
Path of run summary file with time reference information
- --pedestal-ids-file
Path to a file containing the ids of the interleaved pedestal events
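Since datasequence.py takes every calibration product and ancillary file explicitly, a driver typically assembles the call from a mapping of inputs. All paths and run numbers below are placeholders; this mirrors the options listed above rather than lstosa internals:

```python
inputs = {
    "--pedcal-file": "calib/calibration.Run01234.h5",
    "--drs4-pedestal-file": "calib/drs4_pedestal.Run01230.h5",
    "--time-calib-file": "calib/time_calibration.Run01234.h5",
    "--systematic-correction-file": "calib/ffactor_systematics.h5",
    "--drive-file": "monitoring/drive_log.txt",
    "--run-summary": "monitoring/run_summary.ecsv",
}

cmd = ["datasequence.py"]
for flag, path in inputs.items():
    cmd += [flag, path]
cmd += ["02005", "LST1"]  # positional arguments: run_number and tel_id
print(" ".join(cmd))
```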
API/References
Script called from the batch scheduler to process a run.
Functions
- Performs all the steps to process a whole run.
- Prepare and launch the actual lstchain script that performs the low- and high-level calibration of raw camera images.
- Prepare and execute the DL1-to-DL2 lstchain scripts that apply the already trained RF models to DL1 files.
- Prepare and launch the actual lstchain script that performs the image cleaning, taking into account the interleaved pedestal information, and obtains the shower parameters.
- Run the datacheck script.
closer.py
Checks that all sequences are finished and completed, extracts the
provenance from the prov.log
file and merges the DL1 data-check files.
It also moves the analysis products to their final destinations.
Warning
This script will eventually be superseded by autocloser.py.
Usage
usage: closer.py [-h] [-c CONFIG] [-d DATE] [-s] [-t] [-v] [-y]
[--seq SEQTOCLOSE] [--no-dl2]
{ST,LST1,LST2}
Positional Arguments
- tel_id
Possible choices: ST, LST1, LST2
Named Arguments
- -c, --config
Use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -d, --date
Date (YYYY-MM-DD) of the start of the night
- -s, --simulate
Do not run, just simulate what would happen
Default: False
- -t, --test
Avoid interaction with SLURM
Default: False
- -v, --verbose
Activate debugging mode
Default: False
- -y, --yes
Assume yes to all questions
Default: False
- --seq
If you only want to close a certain sequence
- --no-dl2
Do not produce DL2 files (default False)
Default: False
API/References
End-of-night script and functions. Check that everything has been processed, collect results and merge them if needed.
Functions
- Return a bool assessing whether the sequencer has successfully finished or not.
- Ask the user whether sequences should be closed or not.
- Set of last instructions.
- Identify the different types of files, try to close the sequences and copy output files to the corresponding data directories.
- Check that all sequences are finished.
- Extract provenance run-wise from the prov.log file where it was stored sub-run-wise.
- Merge the DL1 datacheck h5 files run-wise and generate the PDF files.
- Write the analysis report to the closer file.
- Merge DL1b or DL2 h5 files run-wise.
- Run daily DL1 checks using the longterm script.
- Build the daily longterm command.
- Consider the observation as finished if it is later than 08:00 UTC of the next day set by options.date.
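The last criterion above can be written out explicitly: the night that starts on options.date is considered finished once the clock passes 08:00 UTC on the following day. A sketch (function name and signature are illustrative):

```python
from datetime import datetime, timedelta, timezone

def observation_finished(night_start, now=None):
    """True once it is past 08:00 UTC on the day after the start of the night."""
    cutoff = datetime(
        night_start.year, night_start.month, night_start.day,
        hour=8, tzinfo=timezone.utc,
    ) + timedelta(days=1)
    if now is None:
        now = datetime.now(timezone.utc)
    return now > cutoff
```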
provprocess.py
Extracts the provenance information logged in the prov.log
file.
It is executed within closer.py and produces the provenance graphs
and .json
files run-wise.
Usage
usage: provprocess.py [-h] [-c CONFIG] [-f FILTER] [-q]
drs4_pedestal_run_id pedcal_run_id run date prod_id
Positional Arguments
- drs4_pedestal_run_id
Number of the drs4_pedestal used in the calibration
- pedcal_run_id
Number of the pedcal run used in the calibration
- run
Number of the run whose provenance is to be extracted
- date
Observation starting date YYYYMMDD
- prod_id
Production ID
Named Arguments
- -c, --config
use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -f, --filter
filter by process granularity [calibration, r0_to_dl1 or dl1_to_dl2]
Default: ""
- -q
use this flag to reset session and remove log file
Default: False
API/References
Provenance post processing script for OSA pipeline.
Functions
- Copy file used in process.
- Filter content in log file to produce a run/process wise session log.
- Process provenance info to reduce session at run/process wise scope.
- Create run-wise provenance products as JSON logs and graphs according to granularity.
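The --filter option corresponds to selecting, from the session log, only the records of one process granularity. A toy illustration of that kind of filtering, assuming a simple "process: message" line format (the real prov.log structure is defined by lstosa, not by this sketch):

```python
def filter_by_process(lines, process):
    """Keep only the log records belonging to one process granularity."""
    return [line for line in lines if line.split(":", 1)[0].strip() == process]

log = [
    "calibration: created DRS4 pedestal file",
    "r0_to_dl1: produced DL1 file for subrun 0001",
    "dl1_to_dl2: applied the trained RF models",
]
print(filter_by_process(log, "r0_to_dl1"))
```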
copy_datacheck.py
Copy the calibration and DL1 data-check files to the datacheck web server.
Usage
usage: copy_datacheck.py [-h] [-c CONFIG] [-d DATE] [-s] [-t] [-v]
{ST,LST1,LST2}
Positional Arguments
- tel_id
Possible choices: ST, LST1, LST2
Named Arguments
- -c, --config
Use specific config file [default configs/sequencer.cfg]
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -d, --date
Date (YYYY-MM-DD) of the start of the night
- -s, --simulate
Do not run, just simulate what would happen
Default: False
- -t, --test
Avoid interaction with SLURM
Default: False
- -v, --verbose
Activate debugging mode
Default: False
API/References
Script to copy analysis products to the datacheck webserver, creating new directories whenever they are needed.
Functions
- Check if all files of a given data type are copied.
- Copy files to the webserver host.
- Return the path to the datacheck directory given the data type.
- Change from YYYY-MM-DD to YYYYMMDD format (used for directories).
- Create the final destination directory for each data level.
- Return a list of files matching the pattern.
- Get the list of run sequences processed for the given date by globbing the run-wise DL1 files.
- Copy datacheck products to the webserver.
- Create a logger with a customized formatted handler.
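The date reformatting helper listed above reduces to a single strptime/strftime round trip; a minimal equivalent (not the lstosa implementation itself):

```python
from datetime import datetime

def night_directory(date_str: str) -> str:
    """Convert '2023-09-14' (YYYY-MM-DD) to '20230914' (YYYYMMDD)."""
    return datetime.strptime(date_str, "%Y-%m-%d").strftime("%Y%m%d")

print(night_directory("2023-09-14"))  # 20230914
```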
simulate_processing.py
It simulates the processing of the data sequence, generating the
provenance products in the prov.log
file.
Usage
usage: simulate_processing.py [-h] [-c CONFIG] [-p] [--force] [--append]
Named Arguments
- -c, --config
use specific config file
Default: /home/docs/checkouts/readthedocs.org/user_builds/lstosa/envs/v0.10.2/lib/python3.11/site-packages/osa/configs/sequencer.cfg
- -p
produce provenance files
Default: False
- --force
force overwrite provenance files
Default: False
- --append
append provenance capture to existing prov.log file
Default: False
API/References
Simulate executions of the data processing pipeline and produce provenance.
If it is not executed by the tests, please run pytest --basetemp=test_osa first; the test_osa folder needs to be filled with test datasets.
python osa/scripts/simulate_processing.py
Functions
- Parse batch templates.
- Set up the folder structure and check flags.
- Simulate daily processing and capture provenance.
- Simulate subrun processing.
- Tear down the created temporary folders.
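The set-up and tear-down helpers at the end follow the standard tempfile pattern; a minimal sketch (folder names are illustrative, not lstosa's actual layout):

```python
import shutil
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp(prefix="test_osa_"))  # set-up: fresh temporary tree
(workdir / "running_analysis").mkdir()                # illustrative folder structure
# ... simulated processing would write its provenance here ...
shutil.rmtree(workdir)                                # tear-down: remove everything
print(workdir.exists())  # False
```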