Grab a set of daq files from RCF which cover the lifetime of the run, the luminosity range experienced, and the conditions for the production.
bfc.C macros are located under ~starofl/bfc. Edit the submit.[Production] script to point to the daq files loaded (as above).
The results of the previous jobs will be .tags.root files located on HPSS. Retrieve the files and set a pointer for the tags files in the Production-specific directory under ~starofl/embedding.
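If the tags files need to be staged out of HPSS first, a minimal sketch using hsi is shown here (the HPSS directory is an assumption; substitute the location where your production's tags files were archived):
hsi "cd /hpss/path/to/P06ib/tags; mget *.tags.root"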
mkdir embedding
cd embedding
mkdir Common
mkdir Common/lists
mkdir Common/csh
mkdir GSTAR
mkdir P06ib
mkdir P06ib/setup
cd /u/user/embedding
cp /u/starofl/embedding/getVerticesFromTags_v4.C .
cp -R /u/starofl/embedding/P06ib/EmbeddingLib_v4_noFTPC/ P06ib/
cp /u/starofl/embedding/P06ib/Embedding_sge_noFTPC.pl P06ib/
cp /u/starofl/embedding/P06ib/bfcMixer_v4_noFTPC.C P06ib/
cp /u/starofl/embedding/P06ib/submit.starofl.pl P06ib/submit.user.pl
cp /u/starofl/embedding/P06ib/setup/Piminus_101_spectra.setup P06ib/setup/
cp /u/starofl/embedding/GSTAR/phasespace_P06ib_revfullfield.kumac GSTAR/
cp /u/starofl/embedding/GSTAR/phasespace_P06ib_fullfield.kumac GSTAR/
cp /u/starofl/embedding/Common/submit_sge.pl Common/
You now have all the files needed to run embedding. There are further links to make, but since you are going to export the files to your own cluster you should make those links afterwards.
Alternatively you can run embedding on PDSF from your home directory. There are a number of changes to make first, though, because the various perl scripts have some paths relating to the starofl account inside them.
If you plan to export to a remote site you should tar and/or scp the data. I would recommend tar so that the original package is preserved in case something goes wrong, e.g.
tar -cvf embedding.tar embedding/
scp embedding.tar remoteuser@mycluster.blah.blah:/home/remoteuser
Obviously this step is unnecessary if you intend to run from your PDSF account, although you may still want to create a tar file so that you can undo any changes which turn out to be wrong.
Login to your remote cluster and extract the archive, e.g.
cd /home/remoteuser
tar -xvf embedding.tar
The most obvious changes are the number of places inside the perl scripts where the path or location of other scripts is hard-coded. Each of these must be changed to the corresponding path on your cluster; one such substitution is sketched below.
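For instance, one such substitution could be scripted as follows (a sketch only; the replacement path /home/remoteuser/embedding is an assumption, and each script should still be checked by hand afterwards):
sed -i 's|/u/starofl/embedding|/home/remoteuser/embedding|g' P06ib/*.pl Common/*.pl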
For example, at PDSF the daq directory is /dante3/starprod/daq/2005/cuProductionMinBias/FullField whereas on the Birmingham cluster it is /star/data1/daq/2005/cuProductionMinBias/FullField, and thus the pattern match in perl has to change in order to extract the same information. If you have a choice then choose your directory names with care!
The -q option provides the name of the queue to use; otherwise the default queue is used, which I did not want in this case. The other extra options make the environment and working directory correct, as the defaults were not right for us. This is very specific to each cluster. If your cluster does not have SGE then I imagine extensive changes to the part that writes the job submission script would be necessary. The scripts use SGE's ability to run job arrays of similar jobs, so you would have to emulate that somehow.
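For orientation, the submission the scripts build is of roughly this form (a sketch; the queue name, array size and job script name are assumptions, and the real command is assembled by the perl scripts):
qsub -q star.q -cwd -V -t 1-50 ../Common/csh/embed_job.csh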
Also check the chain3->SetFlags(...) line, which actually sets the chain flags; this is something Andrew and I both had to change after I made the original copy, e.g. to add the GeantOut option.
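A quick way to locate and inspect those lines (just a convenience command, not part of the procedure):
grep -nE "SetFlags|GeantOut" P06ib/bfcMixer_v4_noFTPC.C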
The following symbolic links are present in the production directory at PDSF (link name -> target):
daq_dir_2005_cuPMBFF -> /dante3/starprod/daq/2005/cuProductionMinBias/FullField
daq_dir_2005_cuPMBRFF -> /dante3/starprod/daq/2005/cuProductionMinBias/ReversedFullField
daq_dir_2005_cuPMBHTFF -> /eliza5/starprod/daq/2005/cucuProductionHT/FullField/
daq_dir_2005_cuPMBHTRFF -> /eliza5/starprod/daq/2005/cucuProductionHT/ReversedFullField
tags_dir_cu_2005 -> /dante3/starprod/tags/P06ib/2005
tags_dir_cuHT_2005 -> /eliza5/starprod/embedding/tags/P06ib
data -> /eliza12/starprod/embedding/data
lists -> ../Common/lists
csh -> ../Common/csh
LOG -> ../Common/LOG
You will therefore need similar links pointing to where you store your daq files (and associated tags files) and to where you want the output data to go, as sketched below.
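A minimal sketch of creating equivalent links on your own cluster (the target directories are assumptions; point them at your own daq, tags and output areas):
cd /home/remoteuser/embedding/P06ib
ln -s /home/remoteuser/daq/2005/cuProductionMinBias/FullField daq_dir_2005_cuPMBFF
ln -s /home/remoteuser/tags/P06ib/2005 tags_dir_cu_2005
ln -s /home/remoteuser/embedding_output data
ln -s ../Common/lists lists
ln -s ../Common/csh csh
ln -s ../Common/LOG LOG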
That is it! Some things will probably need to be adapted to your circumstances, but this should give you a good idea of what to do.
Author: Lee Barnby, University of Birmingham (using starembed account)
Modified: A. Rose, Lawrence Berkeley National Laboratory (using starembed account)
Modified Birmingham Files
Upload of the modified embedding infrastructure files used on the Birmingham NP cluster for the Cu+Cu (anti-)Λ and K0S embedding request.
Production Management
1) Usually embedding jobs are run in "HPSS" mode so the files end up in HPSS (via FTP). To transfer them from HPSS to disk, copy the perl script ~starofl/hjort/getEmbed.pl and modify it as needed. This script does at least two things that are not possible with, e.g., a command-line hsi command: it only gets the files needed (usually the .geant and .event files) and it changes the permissions after the transfers. Note that if you do the transfers shortly after running the jobs the files will probably still be on the HPSS disk cache, and the transfers will be much faster than getting the files from tape.
2) To clean up old embedding files make your own copy of ~starofl/hjort/embedAge.pl and use it as needed. Note that $accThresh determines the maximum access time, in days, of files that will not be deleted.
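Conceptually the access-time cut works like the following (a sketch only, not the actual script; the 90-day threshold is an assumption):
find /eliza12/starprod/embedding/data -type f -atime +90
which lists files last accessed more than 90 days ago, i.e. the candidates for deletion.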
Running Embedding
This page describes how to run embedding jobs once the daq files and tags files are in place (see the other page about embedding production setup).
Basics:
Embedding code is located in production-specific directories: ~starofl/embedding/P0xxx. The basic job submission template is typically called submit.starofl.pl in that directory.
Jobs are usually run by user starofl, but personal accounts with group starprod membership will work too (test first, as the group starprod write permissions typically are not in place by default).
The script to submit a set of jobs is submit.[user].pl. The script should be modified to submit an embedding set from the configuration file
~starofl/embedding/[Production]/setup/[Particle]_[set]_[ID].setup
where
[Particle] is the particle type submitted (Piminus for GEANTID=9, as set inside the file)
[set] is the file set submitted (more on this later)
[ID] is the embedding request number
Test procedure:
The best way to test a particular job configuration is to run a single job in "DISK" mode (by selecting a specific daq file in your submission). In this mode all of the intermediate files, scripts, logs, etc., are saved on disk. The location will be under the "data" link in the working directory. You can then go and figure out which script failed, hack as necessary and try to make things work...
Details: