
Configuring athena at LPSC

Preparing the athena environment

  • If you have not already done so, set up your environment as described here.

The PATHENA_GRID_SETUP_SH environment variable

  • This variable tells the pathena module where to find the grid environment configuration file.
    On the LPSC UI, this file is located at /etc/profile.d/env.sh. You therefore need to add a line at the end of the CMT requirements file, as shown below:
    sh-3.00$ cd cmthome/
    sh-3.00$ ls
    cleanup.csh  cleanup.sh  Makefile  requirements  setup.csh  setup.sh
    sh-3.00$ vi requirements
    set CMTSITE STANDALONE
    set SITEROOT /swareas/atls/prod/releases/rel_14-5
    macro ATLAS_DIST_AREA ${SITEROOT}
    
    macro SITE_PROJECT_AREA ${SITEROOT}
    macro EXTERNAL_PROJECT_AREA ${SITEROOT}
    
    #apply_tag overrideConfig
    #apply_tag noCVSROOT 
    apply_tag oneTest
    apply_tag setup
    apply_tag cmt
    apply_tag CMTsetup
    apply_tag 32
    
    macro ATLAS_TEST_AREA ${HOME}/testarea/14.2.21
    macro ATLAS_SETTINGS_AREA "$(ATLAS_SETTINGS_AREA)"
    use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
    set CVSROOT :pserver:anonymous@isscvs.cern.ch:/atlascvs
    set PATHENA_GRID_SETUP_SH /etc/profile.d/env.sh
    
    Then initialize the athena environment as usual; a typical invocation is sketched below.
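
    For reference, a typical initialization for the setup above might look like the following sketch (the exact tag list depends on your site's conventions; 14.2.21 and 32 match the requirements file shown here):

    sh-3.00$ source ~/cmthome/setup.sh -tag=14.2.21,32
    sh-3.00$ echo $PATHENA_GRID_SETUP_SH
    /etc/profile.d/env.sh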

Getting the pathena module

  • If pathena has not been installed on the UI by your administrator, you must install it locally in your account.
    Download the tar.gz archive following the instructions given here; a quick way to check the installation is sketched below.
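
    Once the archive is unpacked, check that the pathena executable is on your PATH. A minimal sketch, assuming a hypothetical install location of $HOME/panda:

    sh-3.00$ export PATH=$HOME/panda:$PATH
    sh-3.00$ which pathena
    /home/user/panda/pathena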

Getting help on the pathena command

  • Then go to the run directory of your analysis package.
    You can get help on the pathena command by typing the following (a sample submission using some of these options follows the help output):
    sh-3.00$ pathena --help
    Usage: pathena [options] <jobOption1.py> [<jobOption2.py> [...]]
    
    'pathena --help' prints a summary of the options
    
    Options:
      -h, --help            show this help message and exit
      --split=SPLIT         Number of sub-jobs to which a job is split
      --nFilesPerJob=NFILESPERJOB
                            Number of files on which each sub-job runs
      --nEventsPerJob=NEVENTSPERJOB
                            Number of events on which each sub-job runs
      --nEventsPerFile=NEVENTSPERFILE
                            Number of events per file
      --site=SITE           Site name where jobs are sent
                            (default:ANALY_BNL_ATLAS_1)
      --inDS=INDS           Name of an input dataset
      --minDS=MINDS         Dataset name for minimum bias stream
      --nMin=NMIN           Number of minimum bias files per one signal file
      --cavDS=CAVDS         Dataset name for cavern stream
      --nCav=NCAV           Number of cavern files per one signal file
      --libDS=LIBDS         Name of a library dataset
      --beamHaloADS=BEAMHALOADS
                            Dataset name for beam halo A-side
      --beamHaloCDS=BEAMHALOCDS
                            Dataset name for beam halo C-side
      --nBeamHaloA=NBEAMHALOA
                            Number of beam halo files for A-side per sub job
      --nBeamHaloC=NBEAMHALOC
                            Number of beam halo files for C-side per sub job
      --beamGasHDS=BEAMGASHDS
                            Dataset name for beam gas Hydrogen
      --beamGasCDS=BEAMGASCDS
                            Dataset name for beam gas Carbon
      --beamGasODS=BEAMGASODS
                            Dataset name for beam gas Oxygen
      --nBeamGasH=NBEAMGASH
                            Number of beam gas files for Hydrogen per sub job
      --nBeamGasC=NBEAMGASC
                            Number of beam gas files for Carbon per sub job
      --nBeamGasO=NBEAMGASO
                            Number of beam gas files for Oxygen per sub job
      --outDS=OUTDS         Name of an output dataset. OUTDS will contain all
                            output files
      --destSE=DESTSE       Destination storage element. All outputs go to DESTSE
                            (default:BNL_ATLAS_2)
      --nFiles=NFILES, --nfiles=NFILES
                            Use a limited number of files in the input dataset
      --nSkipFiles=NSKIPFILES
                            Skip N files in the input dataset
      -v                    Verbose
      -l, --long            Send job to a long queue
      --blong               Send build job to a long queue
      --cloud=CLOUD         cloud where jobs are submitted (default:US)
      --noBuild             Skip buildJob
      --individualOutDS     Create individual output dataset for each data-type.
                            By default, all output files are added to one output
                            dataset
      --noRandom            Enter random seeds manually
      --memory=MEMORY       Required memory size
      --official            Produce official dataset
      --extFile=EXTFILE     pathena exports files with some special extensions
                            (.C, .dat, .py, .xml) in the current directory. If you
                            want to add other files, specify their names, e.g.,
                            data1.root,data2.doc
      --extOutFile=EXTOUTFILE
                            define extra output files, e.g.,
                            output1.txt,output2.dat
      --supStream=SUPSTREAM
                            suppress some output streams. e.g., ESD,TAG
      --noSubmit            Don't submit jobs
      --generalInput        Read input files with general format except
                            POOL,ROOT,ByteStream
      --tmpDir=TMPDIR       Temporary directory in which an archive file is
                            created
      --shipInput           Ship input files to remote WNs
      --noLock              Don't create a lock for local database access
      --fileList=FILELIST   List of files in the input dataset to be run
      --myproxy=MYPROXY     Name of the myproxy server
      --dbRelease=DBRELEASE
                            DBRelease or CDRelease (DatasetName:FileName). e.g.,
                            ddo.000001.Atlas.Ideal.DBRelease.v050101:DBRelease-5.1.1.tar.gz
      --addPoolFC=ADDPOOLFC
                            file names to be inserted into PoolFileCatalog.xml
                            except input files. e.g., MyCalib1.root,MyGeom2.root
      --skipScan            Skip LRC/LFC lookup at job submission
      --inputFileList=INPUTFILELIST
                            name of file which contains a list of files to be run
                            in the input dataset
      --removeFileList=REMOVEFILELIST
                            name of file which contains a list of files to be
                            removed from the input dataset
      --corCheck            Enable a checker to skip corrupted files
      --prestage            EXPERIMENTAL : Enable prestager. Make sure that you
                            are authorized
      --novoms              don't use VOMS extensions
      --useNextEvent        Set this option if your jobO uses theApp.nextEvent()
                            e.g. for G4
      --ara                 use Athena ROOT Access
      --ares                use Athena ROOT Access + PyAthena, i.e., use athena.py
                            instead of python on WNs
      --araOutFile=ARAOUTFILE
                            define output files for ARA, e.g.,
                            output1.root,output2.root
      --trf=TRF             run transformation, e.g. --trf "csc_atlfast_trf.py %IN
                            %OUT.AOD.root %OUT.ntuple.root -1 0"
      --spaceToken=SPACETOKEN
                            spacetoken for outputs. e.g., ATLASLOCALGROUPDISK
      --notSkipMissing      If input files are not read from SE, they will be
                            skipped by default. This option disables the
                            functionality
      --burstSubmit=BURSTSUBMIT
                            Please don't use this option. Only for site validation
                            by experts
      --devSrv              Please don't use this option. Only for developers to
                            use the dev panda server
      --useAIDA             use AIDA
      --inputType=INPUTTYPE
                            File type in input dataset which contains multiple
                            file types
      --mcData=MCDATA       Create a symlink with linkName to .dat which is
                            contained in input file
      --pfnList=PFNLIST     Name of file which contains a list of input PFNs.
                            Those files can be un-registered in DDM
      --useExperimental     use experimental features
      -c COMMAND            One-liner, runs before any jobOs
      -p BOOTSTRAP          location of bootstrap file

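    For illustration, a submission combining a few of the options above might look like the following sketch; the job options file, dataset names, and site are placeholders, not real values:

    sh-3.00$ cd $HOME/testarea/14.2.21/MyAnalysis/run
    sh-3.00$ pathena MyJobOptions.py \
                 --inDS my.input.dataset.AOD \
                 --outDS user.yourname.myanalysis.test1 \
                 --split 10 \
                 --site ANALY_LPSC

    Adding --noSubmit to the command line is a safe way to check the setup without actually sending jobs.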